modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
SakataHalmi/q-Taxi-v3y | SakataHalmi | 2023-09-20T15:46:11Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T15:46:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3y
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SakataHalmi/q-Taxi-v3y", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
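Once the Q-table is loaded, a short greedy rollout is enough to sanity-check the reported mean reward. The sketch below continues from the snippet above and assumes the pickled dict exposes a `qtable` key (as in the Hugging Face Deep RL course format) and a Gymnasium-style `reset()`/`step()` API; adjust the key names and API to your setup.
```python
import numpy as np

# Greedy evaluation sketch — `env` and `model` come from the usage snippet above.
def evaluate(env, qtable, n_episodes=100):
    rewards = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = int(np.argmax(qtable[state]))  # always pick the best-known action
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        rewards.append(total)
    return float(np.mean(rewards)), float(np.std(rewards))

mean_reward, std_reward = evaluate(env, model["qtable"])
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```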
|
MohanaPriyaa/image_classification | MohanaPriyaa | 2023-09-20T15:43:27Z | 63 | 1 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-20T14:06:18Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: MohanaPriyaa/image_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MohanaPriyaa/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2925
- Validation Loss: 0.2284
- Train Accuracy: 0.909
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
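The serialized optimizer above is the standard TensorFlow setup from `transformers`. The following sketch shows how an equivalent optimizer would typically be constructed; it mirrors the listed values but is not the original training script.
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) decay from 3e-5 to 0 over 4000 steps, as in the serialized schedule.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-5,
    decay_steps=4000,
    end_learning_rate=0.0,
    power=1.0,
)

# AdamWeightDecay with the listed betas, epsilon and decoupled weight decay of 0.01.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    weight_decay_rate=0.01,
)
```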
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2925 | 0.2284 | 0.909 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tonystark0/bert-finetuned-ner | tonystark0 | 2023-09-20T15:42:25Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-20T15:30:39Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9302325581395349
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9396084964598085
- name: Accuracy
type: accuracy
value: 0.9858715488314593
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9302
- Recall: 0.9492
- F1: 0.9396
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
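As a rough guide, these hyperparameters correspond to a `TrainingArguments` configuration like the sketch below (the `output_dir` is a placeholder; the Adam betas and epsilon listed are the `Trainer` defaults).
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; AdamW with betas=(0.9, 0.999) and
# epsilon=1e-8 is the default optimizer used by Trainer.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```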
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0788 | 1.0 | 1756 | 0.0763 | 0.9123 | 0.9337 | 0.9229 | 0.9800 |
| 0.0393 | 2.0 | 3512 | 0.0605 | 0.9262 | 0.9480 | 0.9370 | 0.9855 |
| 0.0255 | 3.0 | 5268 | 0.0609 | 0.9302 | 0.9492 | 0.9396 | 0.9859 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
signon-project/mbart-large-cc25-ft-amr30-es | signon-project | 2023-09-20T15:34:44Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-01-19T11:50:36Z | ---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: es+no_processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es+no_processing
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5874
- Smatch Precision: 74.08
- Smatch Recall: 76.84
- Smatch Fscore: 75.44
- Smatch Unparsable: 0
- Percent Not Recoverable: 0.2323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:-------------:|:-----------------:|:-----------------------:|
| 0.3908 | 1.0 | 3477 | 1.4300 | 19.74 | 68.95 | 30.7 | 0 | 0.0 |
| 0.256 | 2.0 | 6954 | 0.8998 | 27.75 | 70.61 | 39.85 | 1 | 0.0581 |
| 0.0704 | 3.0 | 10431 | 0.8727 | 30.09 | 72.2 | 42.47 | 0 | 0.1161 |
| 0.0586 | 4.0 | 13908 | 0.7774 | 37.1 | 74.93 | 49.62 | 0 | 0.1161 |
| 0.1059 | 5.0 | 17385 | 0.6322 | 42.52 | 74.54 | 54.15 | 1 | 0.1161 |
| 0.0424 | 6.0 | 20862 | 0.6090 | 47.13 | 76.21 | 58.25 | 0 | 0.0 |
| 0.0139 | 7.0 | 24339 | 0.5768 | 48.3 | 77.31 | 59.46 | 0 | 0.0581 |
| 0.08 | 8.0 | 27817 | 0.5608 | 55.74 | 77.16 | 64.72 | 0 | 0.1161 |
| 0.0224 | 9.0 | 31294 | 0.5937 | 54.91 | 77.02 | 64.11 | 0 | 0.0581 |
| 0.0757 | 10.0 | 34771 | 0.5588 | 59.53 | 77.47 | 67.32 | 0 | 0.0581 |
| 0.0613 | 11.0 | 38248 | 0.5894 | 60.83 | 77.82 | 68.28 | 0 | 0.0581 |
| 0.1045 | 12.0 | 41725 | 0.5847 | 61.23 | 77.17 | 68.28 | 0 | 0.1742 |
| 0.012 | 13.0 | 45202 | 0.5588 | 65.61 | 77.47 | 71.05 | 0 | 0.0 |
| 0.0591 | 14.0 | 48679 | 0.5609 | 66.51 | 77.86 | 71.74 | 0 | 0.0581 |
| 0.0252 | 15.0 | 52156 | 0.5653 | 67.48 | 77.75 | 72.25 | 0 | 0.0 |
| 0.0129 | 16.0 | 55634 | 0.5602 | 68.92 | 77.57 | 72.99 | 0 | 0.0 |
| 0.0006 | 17.0 | 59111 | 0.5876 | 68.57 | 77.81 | 72.9 | 0 | 0.1742 |
| 0.0182 | 18.0 | 62588 | 0.5951 | 68.97 | 77.96 | 73.19 | 0 | 0.1161 |
| 0.018 | 19.0 | 66065 | 0.5865 | 70.63 | 77.68 | 73.98 | 0 | 0.0581 |
| 0.0097 | 20.0 | 69542 | 0.6073 | 71.68 | 77.38 | 74.42 | 0 | 0.1161 |
| 0.0021 | 21.0 | 73019 | 0.5984 | 72.25 | 77.92 | 74.98 | 0 | 0.0581 |
| 0.0371 | 22.0 | 76496 | 0.5907 | 72.92 | 77.59 | 75.18 | 0 | 0.1742 |
| 0.0382 | 23.0 | 79973 | 0.5928 | 73.06 | 77.49 | 75.21 | 0 | 0.1742 |
| 0.0148 | 24.0 | 83451 | 0.5903 | 73.98 | 77.15 | 75.53 | 0 | 0.0581 |
| 0.1326 | 25.0 | 86925 | 0.5874 | 74.08 | 76.84 | 75.44 | 0 | 0.2323 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Sudhee1997/Llama-2-7b-Custom-Recruit | Sudhee1997 | 2023-09-20T15:33:14Z | 5 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T11:25:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
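For reference, the values above correspond to a `BitsAndBytesConfig` along the lines of the sketch below; the base model this adapter was trained on is not declared in this card, so only the quantization settings are shown.
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, matching the training-time config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```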
### Framework versions
- PEFT 0.4.0
|
willyninja30/llama-2-7b-chat-hf-fr-en-python | willyninja30 | 2023-09-20T15:33:02Z | 5 | 0 | peft | [
"peft",
"text-generation",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| text-generation | 2023-08-25T10:16:03Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
inference: true
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
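As a usage sketch (not part of the original card), the adapter can typically be attached to the declared base model with the same 8-bit setting; access to the gated `meta-llama/Llama-2-7b-chat-hf` repository is assumed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"                    # declared base model
adapter_id = "willyninja30/llama-2-7b-chat-hf-fr-en-python"  # this repository

# 8-bit loading, matching the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)    # attach the LoRA adapter
```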
### Framework versions
- PEFT 0.6.0.dev0 |
CyberHarem/perusepone2shi_jashinchandropkick | CyberHarem | 2023-09-20T15:32:55Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/perusepone2shi_jashinchandropkick",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T15:01:25Z | ---
license: mit
datasets:
- CyberHarem/perusepone2shi_jashinchandropkick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of perusepone2shi_jashinchandropkick
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/perusepone2shi_jashinchandropkick.pt` as the embedding and `4080/perusepone2shi_jashinchandropkick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with the score of 0.851. The trigger words are:
1. `perusepone2shi_jashinchandropkick`
2. `short_hair, pointy_ears, hair_over_one_eye, grey_hair, red_eyes, white_hair, ribbon, smile`
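These files are usually used from an SD WebUI (the `.pt` goes into `embeddings/`, the `.safetensors` into the LoRA folder). If the exported files happen to be compatible with diffusers' loaders, a rough sketch of using both together would look like this; the preview checkpoint follows the card, and compatibility of HCP-Diffusion exports with these loaders is an assumption.
```python
import torch
from diffusers import StableDiffusionPipeline

# Preview images for this card were generated with Meina/MeinaMix_V11.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file is loaded as a textual-inversion embedding (the first trigger word),
# and the .safetensors file as the LoRA weights — both are needed, as described above.
pipe.load_textual_inversion(
    "4080/perusepone2shi_jashinchandropkick.pt",
    token="perusepone2shi_jashinchandropkick",
)
pipe.load_lora_weights("4080", weight_name="perusepone2shi_jashinchandropkick.safetensors")

image = pipe(
    "perusepone2shi_jashinchandropkick, short_hair, pointy_ears, grey_hair, red_eyes, smile"
).images[0]
image.save("preview.png")
```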
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.840 | [Download](5100/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.844 | [Download](4760/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.786 | [Download](4420/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.851** | [**Download**](4080/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.652 | [Download](3740/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.654 | [Download](3400/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.645 | [Download](3060/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.806 | [Download](2720/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.756 | [Download](2380/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.674 | [Download](2040/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.688 | [Download](1700/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.629 | [Download](1360/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.470 | [Download](1020/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.382 | [Download](680/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.097 | [Download](340/perusepone2shi_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Faradaylab/Aria_7b_v2 | Faradaylab | 2023-09-20T15:30:57Z | 7 | 3 | peft | [
"peft",
"llama7b",
"LLAMA2",
"opensource",
"culture",
"code",
"text-generation",
"fr",
"en",
"es",
"dataset:Snit/french-conversation",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
]
| text-generation | 2023-08-25T12:59:18Z | ---
language:
- fr
- en
- es
license: llama2
library_name: peft
tags:
- llama7b
- LLAMA2
- peft
- opensource
- culture
- code
datasets:
- Snit/french-conversation
inference: true
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-chat-hf
---
ARIA 7B V2 is a model created by Faraday 🇫🇷 🇧🇪
The growing need for artificial intelligence tools around the world has created a race for GPU power. We decided to create an affordable model with stronger French skills that can run on a single GPU and reduce the data bias observed in models trained mostly on English-only datasets.
ARIA 7B has been trained on over 20,000 tokens of a high-quality French dataset. ARIA 7B is one of the best open-source models in the world available at this parameter size.
## Training procedure: NVIDIA A100. Thanks to NVIDIA GPUs and the Inception program, we were able to train our model in less than 24 hours.
## Base model: LLAMA_2-7B-CHAT-HF
We strongly believe that training models on datasets in more languages not only broadens their knowledge base but also brings more open analytical perspectives, less focused on the visions and opinions of only one part of the world.
## Contact
[email protected]
## Number of Epoch : 2
## Timing : Less than 24 hours
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0 |
willyninja30/aria7Beta | willyninja30 | 2023-09-20T15:30:47Z | 8 | 1 | peft | [
"peft",
"llama2",
"pytorch",
"french",
"text-generation",
"fr",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-08-23T14:37:35Z | ---
language:
- fr
- en
license: apache-2.0
library_name: peft
tags:
- llama2
- pytorch
- french
inference: false
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-chat-hf
---
## ARIA 7B is a model created by Faraday
The growing need for artificial intelligence tools around the world has created a race for GPU power. We decided to create an affordable model with stronger French skills that can run on a single GPU and reduce the data bias observed in models trained mostly on English-only datasets.
ARIA 7B has been trained on over 20,000 tokens of a high-quality French dataset. ARIA 7B is one of the best open-source models in the world available at this parameter size.
GPU used for training: NVIDIA V100. Thanks to NVIDIA GPUs and the Inception program, we were able to train our model in less than 24 hours.
Base model: LLAMA_2-7B-CHAT-HF
We strongly believe that training models on datasets in more languages not only broadens their knowledge base but also brings more open analytical perspectives, less focused on the visions and opinions of only one part of the world.
## ARIA 7B est un modèle créé par Faraday
Le besoin croissant en intelligence artificiele dans le monde a créé une course vers la puissance de calcul des cartes graphiques.
Nous avons décidé de créer un modèle accessible capable de tourner sur une seule carte graphique et réduisant les biais d'algorithmes observés sur les modèles entrainés uniquement sur des bases de données en anglais.
ARIA 7B a été entrainé sur un dataset de grande qualité avec plus de 20.000 tokens en Français.
GPU(Carte graphique) utilisée pour le finetuning: NVIDIA V100. Merci à NVIDIA et au programme Nvidia Inception qui nous a orienté pendant tout le processus et nous a permis d'entrainer le modèle en moins de 24h.
Modèle de base : LLAMA_2-7B-CHAT-HF
Nous pensons que le fait d'entraîner des modèles sur des langues différentes permet non seulement d'élargir la base de connaissance mais aussi de donner d'autres perspectives d'analyses plus ouvertes,et moins centrées sur la vision et les opinions exprimées par une seule partie du monde.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0 |
SakataHalmi/q-FrozenLake-v1-4x4-noSlippery | SakataHalmi | 2023-09-20T15:30:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T15:30:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="SakataHalmi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BramVanroy/mbart-large-cc25-ft-amr30-en | BramVanroy | 2023-09-20T15:28:41Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-20T15:26:13Z | ---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: en+no_processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en+no_processing
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4481
- Smatch Precision: 80.57
- Smatch Recall: 83.81
- Smatch Fscore: 82.16
- Smatch Unparsable: 0
- Percent Not Recoverable: 0.3484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:-------------:|:-----------------:|:-----------------------:|
| 0.3471 | 1.0 | 3477 | 1.4889 | 22.35 | 73.05 | 34.23 | 0 | 0.1161 |
| 0.1741 | 2.0 | 6954 | 0.8681 | 30.1 | 71.92 | 42.44 | 0 | 0.1161 |
| 0.1296 | 3.0 | 10431 | 0.7081 | 38.6 | 78.68 | 51.8 | 0 | 0.0581 |
| 0.1308 | 4.0 | 13908 | 0.9546 | 37.49 | 78.23 | 50.69 | 0 | 0.0 |
| 0.2213 | 5.0 | 17385 | 0.5544 | 47.63 | 81.17 | 60.03 | 0 | 0.0 |
| 0.0317 | 6.0 | 20862 | 0.4884 | 49.3 | 80.9 | 61.27 | 0 | 0.0 |
| 0.1007 | 7.0 | 24339 | 0.4763 | 54.88 | 82.09 | 65.78 | 0 | 0.0 |
| 0.092 | 8.0 | 27817 | 0.4444 | 57.37 | 83.2 | 67.91 | 0 | 0.0 |
| 0.1051 | 9.0 | 31294 | 0.4192 | 64.37 | 83.81 | 72.82 | 0 | 0.0 |
| 0.0079 | 10.0 | 34771 | 0.4685 | 61.3 | 83.1 | 70.55 | 0 | 0.0 |
| 0.0211 | 11.0 | 38248 | 0.4389 | 63.36 | 84.57 | 72.44 | 0 | 0.1161 |
| 0.1122 | 12.0 | 41725 | 0.4146 | 69.39 | 83.56 | 75.82 | 0 | 0.0581 |
| 0.0183 | 13.0 | 45202 | 0.4003 | 73.9 | 83.71 | 78.5 | 0 | 0.0 |
| 0.0244 | 14.0 | 48679 | 0.4208 | 73.79 | 83.92 | 78.53 | 0 | 0.1161 |
| 0.0116 | 15.0 | 52156 | 0.4248 | 73.88 | 83.85 | 78.55 | 0 | 0.1161 |
| 0.0357 | 16.0 | 55634 | 0.4235 | 75.78 | 84.08 | 79.71 | 0 | 0.1161 |
| 0.0006 | 17.0 | 59111 | 0.4181 | 76.15 | 84.15 | 79.95 | 0 | 0.0581 |
| 0.0329 | 18.0 | 62588 | 0.4494 | 77.21 | 84.12 | 80.52 | 0 | 0.0 |
| 0.0003 | 19.0 | 66065 | 0.4389 | 78.02 | 84.13 | 80.96 | 0 | 0.0 |
| 0.04 | 20.0 | 69542 | 0.4439 | 78.78 | 84.23 | 81.41 | 0 | 0.0 |
| 0.0182 | 21.0 | 73019 | 0.4430 | 79.82 | 84.05 | 81.88 | 0 | 0.0581 |
| 0.0006 | 22.0 | 76496 | 0.4488 | 79.96 | 83.74 | 81.81 | 0 | 0.0581 |
| 0.0074 | 23.0 | 79973 | 0.4569 | 79.84 | 83.85 | 81.79 | 0 | 0.0581 |
| 0.0133 | 24.0 | 83451 | 0.4469 | 80.45 | 83.81 | 82.09 | 0 | 0.2904 |
| 0.0055 | 25.0 | 86925 | 0.4481 | 80.57 | 83.81 | 82.16 | 0 | 0.3484 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
csakarwa/Model1cs | csakarwa | 2023-09-20T15:24:47Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T15:24:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
AnatolyBelov/my_t5_small_test | AnatolyBelov | 2023-09-20T15:19:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-18T14:00:07Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_t5_small_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 6.372
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_t5_small_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5026
- Bleu: 6.372
- Gen Len: 17.5713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.7108 | 1.0 | 6355 | 1.5113 | 6.3012 | 17.5701 |
| 1.6833 | 2.0 | 12710 | 1.5026 | 6.372 | 17.5713 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
abhishek23HF/MARKEtING_BLOOMZ_1B | abhishek23HF | 2023-09-20T15:17:28Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T15:17:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
databio/v2v-ChIP-atlas-hg38-ATAC | databio | 2023-09-20T15:15:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-19T20:50:13Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Vec2Vec ChIP-atlas hg38
## Model Details
### Model Description
This is a Vec2Vec model that encodes embedding vectors of natural language into embedding vectors of BED files. It was trained on hg38 ChIP-atlas ATAC-seq data. The natural-language metadata came from the experiment list; its embedding vectors were encoded by [sentence-transformers](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). The BED files were embedded by [Region2Vec](https://huggingface.co/databio/r2v-ChIP-atlas-hg38).
- **Developed by:** Ziyang "Claude" Hu
- **Model type:** Vec2Vec
- **Language(s) (NLP):** hg38
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/databio/geniml
- **Paper [optional]:** N/A
## Uses
This model can be used to search BED files with natural-language query strings. In the search interface, the query string is encoded by the same sentence-transformers model, and the resulting vector is mapped to the final query vector by this Vec2Vec model. The K BED files whose embedding vectors (produced by the same Region2Vec model) are closest to the final query vector are returned as results. The model is limited to hg38, and it is not recommended for data outside ATAC-seq.
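The search flow described above can be sketched as follows. The sentence-transformers step uses the concrete model named in this card; the `vec2vec` mapping and the matrix of precomputed Region2Vec BED-file embeddings are placeholders, since the geniml loading API is not yet documented here.
```python
import numpy as np
from sentence_transformers import SentenceTransformer

# 1) Encode the natural-language query with the same encoder used for the metadata.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
query_vec = encoder.encode("open chromatin in human liver")  # example query string

# 2) Map the text embedding into BED-embedding space with this Vec2Vec model.
#    `vec2vec` is a placeholder for however geniml exposes the trained mapping.
bed_query_vec = vec2vec.transform(query_vec)

# 3) Rank precomputed Region2Vec embeddings of BED files by cosine similarity.
#    `bed_embeddings` is an (n_files, dim) array, `bed_ids` the matching file names.
sims = bed_embeddings @ bed_query_vec / (
    np.linalg.norm(bed_embeddings, axis=1) * np.linalg.norm(bed_query_vec)
)
top_k = np.argsort(-sims)[:10]
print([bed_ids[i] for i in top_k])
```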
## How to Get Started with the Model
Vec2Vec will allow direct importing from Hugging Face soon.
[More Information Needed]
## Training Details
### Training Data
TODO |
ramchiluveru/MarketingCampaign | ramchiluveru | 2023-09-20T15:14:23Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T15:14:21Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
badassbandit/taxi | badassbandit | 2023-09-20T15:10:46Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T15:10:45Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="badassbandit/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CyberHarem/matsuo_chizuru_idolmastercinderellagirls | CyberHarem | 2023-09-20T15:09:30Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/matsuo_chizuru_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T14:52:03Z | ---
license: mit
datasets:
- CyberHarem/matsuo_chizuru_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matsuo_chizuru_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/matsuo_chizuru_idolmastercinderellagirls.pt` as the embedding and `4760/matsuo_chizuru_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with the score of 0.996. The trigger words are:
1. `matsuo_chizuru_idolmastercinderellagirls`
2. `short_hair, black_hair, blush, hair_ornament, hairclip, black_eyes, thick_eyebrows, purple_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.983 | [Download](5100/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.996** | [**Download**](4760/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.968 | [Download](4420/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.994 | [Download](4080/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.960 | [Download](3740/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.994 | [Download](3400/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.992 | [Download](3060/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.982 | [Download](2720/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.995 | [Download](2380/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.959 | [Download](2040/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.961 | [Download](1700/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.924 | [Download](1360/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.872 | [Download](1020/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.880 | [Download](680/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.313 | [Download](340/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
badassbandit/q-FrozenLake-v1-4x4-noSlippery | badassbandit | 2023-09-20T15:06:16Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T15:06:14Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="badassbandit/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BanUrsus/rl_course_additional_challenge_vizdoom_deathmatch_bots | BanUrsus | 2023-09-20T14:56:40Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T14:56:24Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deathmatch_bots
type: doom_deathmatch_bots
metrics:
- type: mean_reward
value: 1.70 +/- 1.42
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_deathmatch_bots** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r BanUrsus/rl_course_additional_challenge_vizdoom_deathmatch_bots
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=rl_course_additional_challenge_vizdoom_deathmatch_bots
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=rl_course_additional_challenge_vizdoom_deathmatch_bots --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
hosnasn/tannaz1-reza | hosnasn | 2023-09-20T14:54:02Z | 0 | 0 | null | [
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-20T14:47:47Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tannaz1_reza Dreambooth model trained by hosnasn with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4 | csdc-atl | 2023-09-20T14:50:13Z | 170 | 2 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-09-15T18:06:52Z | ---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
# Baichuan 2 13B Chat - Int4
<!-- description start -->
## Description
This repo contains the Int4 GPTQ model files for [Baichuan 2 13B Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat).
<!-- description end -->
<!-- README_GPTQ.md-provided-files start -->
## GPTQ parameters
These GPTQ files were all generated with AutoGPTQ.
- Bits: 4/8
- GS: 32/128
- Act Order: True
- Damp %: 0.1
- GPTQ dataset: a mixed Chinese and English dataset
- Sequence Length: 4096
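For reference, the 4-bit / group-size-32 branch above corresponds to an AutoGPTQ quantization config along these lines (the mixed Chinese/English calibration set itself is not published, so it only appears as a comment):
```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# 4-bit, group size 32, with act-order and the damp value listed above (the 4bit-32g branch).
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=32,
    desc_act=True,      # "Act Order: True"
    damp_percent=0.1,   # "Damp %: 0.1"
)

model = AutoGPTQForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan2-13B-Chat",
    quantize_config,
    trust_remote_code=True,
)
# model.quantize(examples)  # mixed Chinese/English calibration data, sequence length 4096
```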
| Model version | agieval | ceval | cmmlu | size | Inference speed (A100-40G) |
|---|---|---|---|---|---|
| [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | 40.25 | 56.33 | 58.44 | 27.79g | 31.55 tokens/s |
| [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) | 39.01 | 56.63 | 57.81 | 9.08g | 18.45 tokens/s |
| [GPTQ-4bit-32g](https://huggingface.co/csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4/tree/4bit-32g) | 38.64 | 57.18 | 57.47 | 9.87g | 27.35(hf) \ 38.28(autogptq) tokens/s |
| [GPTQ-4bit-128g](https://huggingface.co/csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4/tree/main) | 38.78 | 56.42 | 57.78 | 9.14g | 28.74(hf) \ 39.24(autogptq) tokens/s |
<!-- README_GPTQ.md-provided-files end -->
## How to use this GPTQ model in Python code
### Install the required dependencies
Required: Transformers 4.32.0 or later, Optimum 1.12.0 or later, AutoGPTQ 0.4.2 or later
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you run into problems installing AutoGPTQ from the pre-built pip package, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### Then you can use the following code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
model_name_or_path = "csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-13B-Chat")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
"温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
---
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
🚀 The <a href="https://www.baichuan-ai.com/" target="_blank">Baichuan LLM online chat platform</a> is now officially open to the public 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-13B-Chat")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
"温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use Baichuan 2 models for any activities that harm national or social security or violate the law. We also ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users will abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
Baichuan 2 模型的社区使用需遵循[《Baichuan 2 模型社区许可协议》]。Baichuan 2 支持商用。如果将 Baichuan 2 模型或其衍生品用作商业用途,请您按照如下方式联系许可方,以进行登记并向许可方申请书面授权:联系邮箱 [[email protected]]。
The use of the source code in this repository follows the open-source license Apache 2.0. Community use of the Baichuan 2 model must adhere to the [Community License for Baichuan 2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). Baichuan 2 supports commercial use. If you are using the Baichuan 2 models or their derivatives for commercial purposes, please contact the licensor in the following manner for registration and to apply for written authorization: Email [email protected].
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
OpenDILabCommunity/Lunarlander-v2-C51 | OpenDILabCommunity | 2023-09-20T14:49:22Z | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"LunarLander-v2",
"en",
"license:apache-2.0",
"region:us"
]
| reinforcement-learning | 2023-04-15T12:48:19Z | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- LunarLander-v2
benchmark_name: OpenAI/Gym/Box2d
task_name: LunarLander-v2
pipeline_tag: reinforcement-learning
model-index:
- name: C51
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Box2d-LunarLander-v2
type: OpenAI/Gym/Box2d-LunarLander-v2
metrics:
- type: mean_reward
value: 211.75 +/- 40.32
name: mean_reward
---
# Play **LunarLander-v2** with **C51** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **C51** implementation for OpenAI/Gym/Box2d **LunarLander-v2**, built with the [DI-engine library](https://github.com/opendilab/di-engine) and [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, built on reinforcement learning framework implementations in PyTorch and JAX. It aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. Customized training pipelines and applications can also be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import C51Agent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = C51Agent(
env_id="LunarLander-v2", exp_name="LunarLander-v2-C51", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import C51Agent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-C51")
# Instantiate the agent
agent = C51Agent(
env_id="LunarLander-v2", exp_name="LunarLander-v2-C51", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import C51Agent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = C51Agent(env_id="LunarLander-v2", exp_name="LunarLander-v2-C51")
# Train the agent
return_ = agent.train(step=int(4000000), collector_env_num=8, evaluator_env_num=8, debug=False)
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Box2d",
task_name="LunarLander-v2",
algo_name="C51",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/c51.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html",
installation_guide="pip3 install DI-engine[common_env]",
usage_file_by_git_clone="./c51/lunarlander_c51_deploy.py",
usage_file_by_huggingface_ding="./c51/lunarlander_c51_download.py",
train_file="./c51/lunarlander_c51.py",
repo_id="OpenDILabCommunity/LunarLander-v2-C51",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 260,
'n_evaluator_episode': 8,
'collector_env_num': 8,
'evaluator_env_num': 8,
'env_id': 'LunarLander-v2'
},
'policy': {
'model': {
'encoder_hidden_size_list': [512, 64],
'v_min': -30,
'v_max': 30,
'n_atom': 51,
'obs_shape': 8,
'action_shape': 4
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 10,
'batch_size': 64,
'learning_rate': 0.001,
'target_update_freq': 100,
'target_theta': 0.005,
'ignore_done': False
},
'collect': {
'collector': {},
'n_sample': 64,
'unroll_len': 1
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'figure_path': None,
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 260,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 100000
},
'eps': {
'type': 'exp',
'start': 0.95,
'end': 0.1,
'decay': 50000
}
},
'on_policy': False,
'cuda': False,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'c51',
'priority': False,
'priority_IS_weight': False,
'discount_factor': 0.99,
'nstep': 3,
'cfg_type': 'C51PolicyDict'
},
'exp_name': 'LunarLander-v2-C51',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Lunarlander-v2-C51)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/c51.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-C51/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-C51/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 214.3 KB
- **Last Update Date:** 2023-09-20
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** LunarLander-v2
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
|
napatswift/mt5-fixpdftext | napatswift | 2023-09-20T14:33:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"th",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-14T08:23:49Z | ---
language:
- th
pipeline_tag: text2text-generation
widget:
- text: "Fix the following corrupted text: \"เจาหนาที่รับผิดชอบในการเขาไปเยียวยา โดยเจาพนักงานเจาหนาที่ตามกฎหมาย\""
--- |
Parkhat/llama2-qlora-finetunined-kg-sql | Parkhat | 2023-09-20T14:32:52Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T14:32:31Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
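For reference, a minimal sketch of re-creating an equivalent `BitsAndBytesConfig` when loading a base model to attach this adapter — the base model is not named in this card, so `BASE_MODEL` below is only a placeholder inferred from the repository name:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder: the actual base model is not stated in this card

# Mirror the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base, "Parkhat/llama2-qlora-finetunined-kg-sql")
```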
### Framework versions
- PEFT 0.6.0.dev0
|
jbrinkw/fp1.1 | jbrinkw | 2023-09-20T14:27:23Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-20T14:11:47Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fp1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fp1.1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3884
- Rouge1: 0.1551
- Rouge2: 0.0523
- Rougel: 0.1467
- Rougelsum: 0.1467
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.4836 | 1.0 | 4040 | 1.3884 | 0.1551 | 0.0523 | 0.1467 | 0.1467 | 19.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ChinuNayak28/working | ChinuNayak28 | 2023-09-20T14:15:37Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| null | 2023-09-14T10:13:49Z | ---
base_model: meta-llama/Llama-2-13b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: working
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
linoyts/lora-xl-linoy_face-0.0001-0.0001-1000-1-32 | linoyts | 2023-09-20T14:14:01Z | 4 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-20T13:53:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a TOK woman
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LinoyTsaban/lora-xl-linoy_face-0.0001-0.0001-1000-1-32
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "photo of a TOK woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
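A minimal usage sketch with diffusers (assuming the repo id in the heading above and that these weights load via `load_lora_weights`):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card notes madebyollin/sdxl-vae-fp16-fix was used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("LinoyTsaban/lora-xl-linoy_face-0.0001-0.0001-1000-1-32")

# The instance prompt used during training was "photo of a TOK woman"
image = pipe("photo of a TOK woman, studio lighting", num_inference_steps=30).images[0]
image.save("linoy_face.png")
```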
|
CyberHarem/etou_misaki_idolmastercinderellagirls | CyberHarem | 2023-09-20T14:13:39Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/etou_misaki_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T14:01:32Z | ---
license: mit
datasets:
- CyberHarem/etou_misaki_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of etou_misaki_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/etou_misaki_idolmastercinderellagirls.pt` as the embedding and `4760/etou_misaki_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with a score of 0.964. The trigger words are:
1. `etou_misaki_idolmastercinderellagirls`
2. `brown_hair, long_hair, green_eyes, jewelry, smile, earrings, breasts`
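As a rough sketch of what using the two files together could look like in a diffusers pipeline — this is an assumption, since the card does not prescribe a loader, and HCP-Diffusion outputs may need conversion before diffusers accepts them:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: assumes a diffusers-format copy of the preview base model named above.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")

# The pt file is used as a textual-inversion embedding ...
pipe.load_textual_inversion(
    "4760/etou_misaki_idolmastercinderellagirls.pt",
    token="etou_misaki_idolmastercinderellagirls",
)
# ... and the safetensors file is loaded as the LoRA.
pipe.load_lora_weights("4760", weight_name="etou_misaki_idolmastercinderellagirls.safetensors")

prompt = "etou_misaki_idolmastercinderellagirls, brown_hair, long_hair, green_eyes, smile"
image = pipe(prompt, num_inference_steps=28).images[0]
image.save("preview.png")
```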
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.936 | [Download](5100/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.964** | [**Download**](4760/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.917 | [Download](4420/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.889 | [Download](4080/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.937 | [Download](3740/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.919 | [Download](3400/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.909 | [Download](3060/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.865 | [Download](2720/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.879 | [Download](2380/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.848 | [Download](2040/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.873 | [Download](1700/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.847 | [Download](1360/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.817 | [Download](1020/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.815 | [Download](680/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.751 | [Download](340/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
davidramos/int8-whisper-base-v2-asr-bengali | davidramos | 2023-09-20T14:11:09Z | 1 | 0 | peft | [
"peft",
"base_model:openai/whisper-base",
"base_model:adapter:openai/whisper-base",
"region:us"
]
| null | 2023-09-14T15:51:50Z | ---
library_name: peft
base_model: openai/whisper-base
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
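A minimal sketch of reloading this adapter on top of the 8-bit base model (the `base_model` field above names `openai/whisper-base`):
```python
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Mirror the 8-bit quantization listed above
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-base",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
processor = WhisperProcessor.from_pretrained("openai/whisper-base")

model = PeftModel.from_pretrained(base, "davidramos/int8-whisper-base-v2-asr-bengali")
```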
### Framework versions
- PEFT 0.5.0
### Training loss
| Epoch | Training Loss | Validation Loss |
|:-----:|:-------------:|:---------------:|
| 1 | 0.573400 | 0.595536 |
| 2 | 0.476500 | 0.506768 |
| 3 | 0.421000 | 0.472346 |

|
googcheng/lit_lima | googcheng | 2023-09-20T14:00:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-20T13:57:08Z | Trying out lit-gpt training, following https://sebastianraschka.com/blog/2023/optimizing-LLMs-dataset-perspective.html |
Harshit0722/dolly-fine-tuned-on-med-data | Harshit0722 | 2023-09-20T14:00:38Z | 7 | 1 | peft | [
"peft",
"biology",
"medical",
"code",
"question-answering",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:medmcqa",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:bsd",
"region:us"
]
| question-answering | 2023-06-17T14:06:53Z | ---
language:
- en
license: bsd
library_name: peft
tags:
- biology
- medical
- code
datasets:
- databricks/databricks-dolly-15k
- medmcqa
metrics:
- code_eval
pipeline_tag: question-answering
base_model: databricks/dolly-v2-3b
---
|
Rexhaif/rubert-base-srl-seqlabeling | Rexhaif | 2023-09-20T14:00:29Z | 130 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: rubert-base-srl-seqlabeling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl-seqlabeling
This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1723
- Causator Precision: 0.8539
- Causator Recall: 0.8352
- Causator F1: 0.8444
- Causator Number: 91
- Expiriencer Precision: 0.9259
- Expiriencer Recall: 0.9740
- Expiriencer F1: 0.9494
- Expiriencer Number: 77
- Instrument Precision: 0.375
- Instrument Recall: 1.0
- Instrument F1: 0.5455
- Instrument Number: 3
- Other Precision: 0.0
- Other Recall: 0.0
- Other F1: 0.0
- Other Number: 1
- Predicate Precision: 0.9352
- Predicate Recall: 0.9902
- Predicate F1: 0.9619
- Predicate Number: 102
- Overall Precision: 0.8916
- Overall Recall: 0.9307
- Overall F1: 0.9107
- Overall Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Causator Precision | Causator Recall | Causator F1 | Causator Number | Expiriencer Precision | Expiriencer Recall | Expiriencer F1 | Expiriencer Number | Instrument Precision | Instrument Recall | Instrument F1 | Instrument Number | Other Precision | Other Recall | Other F1 | Other Number | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------:|:------------:|:--------:|:------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2552 | 1.0 | 56 | 0.3471 | 0.8841 | 0.6703 | 0.7625 | 91 | 0.8421 | 0.8312 | 0.8366 | 77 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9259 | 0.9804 | 0.9524 | 102 | 0.8893 | 0.8212 | 0.8539 | 0.9203 |
| 0.2385 | 2.0 | 112 | 0.1608 | 0.9103 | 0.7802 | 0.8402 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.2857 | 0.6667 | 0.4 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9519 | 0.9706 | 0.9612 | 102 | 0.9182 | 0.9015 | 0.9098 | 0.9554 |
| 0.0367 | 3.0 | 168 | 0.1311 | 0.8902 | 0.8022 | 0.8439 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9709 | 0.9804 | 0.9756 | 102 | 0.9228 | 0.9161 | 0.9194 | 0.9673 |
| 0.0494 | 4.0 | 224 | 0.1507 | 0.7812 | 0.8242 | 0.8021 | 91 | 0.9241 | 0.9481 | 0.9359 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9524 | 0.9804 | 0.9662 | 102 | 0.8746 | 0.9161 | 0.8948 | 0.9637 |
| 0.0699 | 5.0 | 280 | 0.1830 | 0.8276 | 0.7912 | 0.8090 | 91 | 0.8941 | 0.9870 | 0.9383 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.875 | 0.9197 | 0.8968 | 0.9560 |
| 0.0352 | 6.0 | 336 | 0.1994 | 0.7857 | 0.8462 | 0.8148 | 91 | 0.9048 | 0.9870 | 0.9441 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9266 | 0.9902 | 0.9573 | 102 | 0.8595 | 0.9380 | 0.8970 | 0.9572 |
| 0.0186 | 7.0 | 392 | 0.1657 | 0.8652 | 0.8462 | 0.8556 | 91 | 0.9146 | 0.9740 | 0.9434 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0052 | 8.0 | 448 | 0.1716 | 0.8556 | 0.8462 | 0.8508 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0094 | 9.0 | 504 | 0.1715 | 0.8444 | 0.8352 | 0.8398 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
| 0.0078 | 10.0 | 560 | 0.1723 | 0.8539 | 0.8352 | 0.8444 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
vabatista/sbert-squad-pt-br | vabatista | 2023-09-20T13:59:10Z | 106 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"sentence-similarity",
"pt",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-08-11T13:47:38Z | ---
license: apache-2.0
datasets:
- squad
language:
- pt
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
This is a Sentence-BERT based model fine-tuned on the Portuguese SQuAD dataset, with batches of:
- positive samples: a question paired with the sentence that contains its answer
- hard negative samples: a question paired with a sentence that does not contain its answer but comes from the same SQuAD context.

This model generates text embeddings for use in semantic search.
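A minimal usage sketch with the sentence-transformers library (assuming the model loads directly by its repo id):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("vabatista/sbert-squad-pt-br")

question = "Quando foi fundada a cidade de São Paulo?"
sentences = [
    "A cidade de São Paulo foi fundada em 25 de janeiro de 1554.",
    "O clima da cidade é subtropical úmido.",
]

# Rank candidate sentences by cosine similarity to the question
q_emb = model.encode(question, convert_to_tensor=True)
s_emb = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(q_emb, s_emb))
```
|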
Yehoon/llama_13b | Yehoon | 2023-09-20T13:58:58Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T13:58:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jonas-luehrs/gbert-base-MLM-eu-or-ddr | jonas-luehrs | 2023-09-20T13:55:47Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-20T12:55:52Z | ---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
model-index:
- name: gbert-base-MLM-eu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base-MLM-eu-or-ddr
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9428 | 1.0 | 403 | 1.5950 |
| 1.6342 | 2.0 | 806 | 1.4453 |
| 1.5324 | 3.0 | 1209 | 1.3751 |
| 1.4729 | 4.0 | 1612 | 1.3533 |
| 1.4536 | 5.0 | 2015 | 1.3389 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mayorov-s/Taxi-v3-QTable | mayorov-s | 2023-09-20T13:50:21Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T13:50:17Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-QTable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.81
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mayorov-s/Taxi-v3-QTable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
FedeBerto/Griffith | FedeBerto | 2023-09-20T13:49:16Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-07T10:48:17Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | AdamW |
| weight_decay | 0.01 |
| clipnorm | 1.0 |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 1.9999999494757503e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-08 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
Tanor/SRGPTSENTNEG2 | Tanor | 2023-09-20T13:47:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:Tanor/SRGPTSENTNEG2",
"base_model:finetune:Tanor/SRGPTSENTNEG2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-10T23:17:47Z | ---
base_model: Tanor/SRGPTSENTNEG2
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: SRGPTSENTNEG2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SRGPTSENTNEG2
This model is a fine-tuned version of [Tanor/SRGPTSENTNEG2](https://huggingface.co/Tanor/SRGPTSENTNEG2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1820
- F1: 0.3235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1297 | 1.0 | 2697 | 0.1564 | 0.0 |
| 0.0867 | 2.0 | 5394 | 0.1468 | 0.2903 |
| 0.0467 | 3.0 | 8091 | 0.1674 | 0.3288 |
| 0.052 | 4.0 | 10788 | 0.2066 | 0.3514 |
| 0.0403 | 5.0 | 13485 | 0.1820 | 0.3235 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0.dev20230801
- Datasets 2.14.2
- Tokenizers 0.13.3
|
mayorov-s/q-FrozenLake-v1-4x4-noSlippery | mayorov-s | 2023-09-20T13:45:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-21T11:34:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mayorov-s/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Rachneet/T5-large-esnli-impli-figurative | Rachneet | 2023-09-20T13:44:28Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-20T15:29:12Z | # load model
```python
import torch
from transformers import (
    AutoTokenizer,
    AutoConfig,
    AutoModelForSeq2SeqLM
)

model_path = "T5-large-esnli-impli-figurative"
tokenizer = AutoTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

premise = "I just caught a guy picking up used chewing gum and he put it in his mouth."
hypothesis = "it was such a pleasant sight to see a guy picking up used chewing gum; and he put it in his mouth"

prepared_input = f"figurative hypothesis: {hypothesis} premise: {premise}"
features = tokenizer(prepared_input, max_length=128, padding="max_length", truncation=True, return_tensors="pt")

model.eval()
model.to(device)
features = features.to(device)  # keep the inputs on the same device as the model

with torch.no_grad():
    # https://huggingface.co/blog/how-to-generate
    generated_ids = model.generate(
        **features,
        max_length=128,
        use_cache=True,
        num_beams=4,
        length_penalty=0.6,
        early_stopping=True,
    )

# decode the best beam and strip the "explanation:" prefix
dec_preds = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print("The prediction is: ", dec_preds)
print(dec_preds[1:].replace("explanation:", "").lstrip())
```
# Example input
figurative hypothesis: I was gone for only a few days and my considerate adult son just let the sink fill up with dirty dishes, making me feel really happy premise: I left my adult son home for a few days and just came back to a sink full of gross old dishes. |
AmitMidday/Dogs-Breed-Classification-Using-Vision-Transformers | AmitMidday | 2023-09-20T13:42:48Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"en",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-20T11:53:41Z | ---
pipeline_tag: image-classification
license: openrail
language:
- en
metrics:
- accuracy
library_name: transformers
--- |
a-v-bely/bert-base-spanish-wwm-cased-finetuned-literature-pro | a-v-bely | 2023-09-20T13:39:53Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-13T04:57:33Z | ---
tags:
- generated_from_trainer
base_model: dccuchile/bert-base-spanish-wwm-cased
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-literature-pro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-literature-pro
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2669 | 1.0 | 9787 | 2.0414 |
| 2.111 | 2.0 | 19574 | 1.9735 |
| 2.0512 | 3.0 | 29361 | 1.9413 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
santiagotoso/TaxiGame-QLearning | santiagotoso | 2023-09-20T13:14:24Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T13:08:32Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: TaxiGame-QLearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="santiagotoso/TaxiGame-QLearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mgoin/mpt-7b-chat-quant | mgoin | 2023-09-20T13:12:55Z | 6 | 1 | transformers | [
"transformers",
"onnx",
"mpt",
"text-generation",
"deepsparse",
"custom_code",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-20T02:43:00Z | ---
tags:
- deepsparse
---
# mpt_7b_chat-dense_quant_linearW8A8MatMul8Embeds8LMhead8
```python
import deepsparse
from huggingface_hub import snapshot_download
MODEL_PATH = snapshot_download(repo_id="mgoin/mpt-7b-chat-quant")
model = deepsparse.Pipeline.create(task="text-generation", model_path=MODEL_PATH)
model(sequences="Tell me a joke.")
``` |
CyberHarem/minosu_jashinchandropkick | CyberHarem | 2023-09-20T13:07:15Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/minosu_jashinchandropkick",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T12:48:47Z | ---
license: mit
datasets:
- CyberHarem/minosu_jashinchandropkick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of minosu_jashinchandropkick
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8840, you need to download `8840/minosu_jashinchandropkick.pt` as the embedding and `8840/minosu_jashinchandropkick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8840**, with a score of 0.955. The trigger words are:
1. `minosu_jashinchandropkick`
2. `blue_hair, horns, short_hair, bell, cow_horns, neck_bell, cowbell, cow_print, animal_print, cow_girl, red_eyes, hair_between_eyes, smile, closed_mouth, pink_eyes`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:--------------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| 10200 | 0.943 | [Download](10200/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10200/previews/bikini.png) | [<NSFW, click to see>](10200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10200/previews/nude.png) | [<NSFW, click to see>](10200/previews/nude2.png) |  |  |
| 9520 | 0.941 | [Download](9520/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9520/previews/bikini.png) | [<NSFW, click to see>](9520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9520/previews/nude.png) | [<NSFW, click to see>](9520/previews/nude2.png) |  |  |
| **8840** | **0.955** | [**Download**](8840/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8840/previews/bikini.png) | [<NSFW, click to see>](8840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8840/previews/nude.png) | [<NSFW, click to see>](8840/previews/nude2.png) |  |  |
| 8160 | 0.947 | [Download](8160/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8160/previews/bikini.png) | [<NSFW, click to see>](8160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8160/previews/nude.png) | [<NSFW, click to see>](8160/previews/nude2.png) |  |  |
| 7480 | 0.950 | [Download](7480/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7480/previews/bikini.png) | [<NSFW, click to see>](7480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7480/previews/nude.png) | [<NSFW, click to see>](7480/previews/nude2.png) |  |  |
| 6800 | 0.939 | [Download](6800/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6800/previews/bikini.png) | [<NSFW, click to see>](6800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6800/previews/nude.png) | [<NSFW, click to see>](6800/previews/nude2.png) |  |  |
| 6120 | 0.917 | [Download](6120/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6120/previews/bikini.png) | [<NSFW, click to see>](6120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6120/previews/nude.png) | [<NSFW, click to see>](6120/previews/nude2.png) |  |  |
| 5440 | 0.946 | [Download](5440/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5440/previews/bikini.png) | [<NSFW, click to see>](5440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5440/previews/nude.png) | [<NSFW, click to see>](5440/previews/nude2.png) |  |  |
| 4760 | 0.952 | [Download](4760/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4080 | 0.904 | [Download](4080/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3400 | 0.882 | [Download](3400/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 2720 | 0.813 | [Download](2720/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2040 | 0.773 | [Download](2040/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1360 | 0.717 | [Download](1360/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 680 | 0.494 | [Download](680/minosu_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
|
AbdelKarim95/ppo-PyramidsRND | AbdelKarim95 | 2023-09-20T13:04:33Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-20T13:04:29Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AbdelKarim95/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kmposkid1/Horse-Health-Outcome-6bca6390-b05b-43b6-831d-8bf9f7caf24c | kmposkid1 | 2023-09-20T13:01:03Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
]
| tabular-classification | 2023-09-20T12:33:52Z | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: catboost_without_hospital_number.pkl
widget:
structuredData:
abdomen:
- distend_small
- distend_small
- distend_large
abdominal_distention:
- none
- none
- moderate
abdomo_appearance:
- serosanguious
- cloudy
- serosanguious
abdomo_protein:
- 4.1
- 4.3
- 2.0
age:
- adult
- adult
- adult
capillary_refill_time:
- less_3_sec
- less_3_sec
- more_3_sec
cp_data:
- 'yes'
- 'yes'
- 'no'
lesion_1:
- 7209
- 2112
- 5400
lesion_2:
- 0
- 0
- 0
lesion_3:
- 0
- 0
- 0
mucous_membrane:
- bright_pink
- bright_pink
- dark_cyanotic
nasogastric_reflux:
- none
- none
- more_1_liter
nasogastric_reflux_ph:
- 7.0
- 3.5
- 2.0
nasogastric_tube:
- slight
- none
- significant
packed_cell_volume:
- 37.0
- 44.0
- 65.0
pain:
- depressed
- mild_pain
- extreme_pain
peripheral_pulse:
- normal
- normal
- reduced
peristalsis:
- hypermotile
- hypomotile
- absent
pulse:
- 84.0
- 66.0
- 72.0
rectal_exam_feces:
- absent
- decreased
- absent
rectal_temp:
- 39.0
- 38.5
- 37.3
respiratory_rate:
- 24.0
- 21.0
- 30.0
surgery:
- 'yes'
- 'yes'
- 'yes'
surgical_lesion:
- 'yes'
- 'yes'
- 'yes'
temp_of_extremities:
- cool
- normal
- cool
total_protein:
- 6.5
- 7.6
- 13.0
---
# Model description
This is a `CatBoost` model, wrapped in a scikit-learn pipeline, trained on horse health outcome data from Kaggle.
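A minimal sketch of downloading the pickled pipeline and scoring one record (the file name comes from the `model_file` field above; only load pickles from sources you trust):
```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="kmposkid1/Horse-Health-Outcome-6bca6390-b05b-43b6-831d-8bf9f7caf24c",
    filename="catboost_without_hospital_number.pkl",
)
pipeline = joblib.load(path)  # if this fails, try the standard pickle module

# One record built from the first widget example above; every listed column must be present.
sample = pd.DataFrame([{
    "surgery": "yes", "age": "adult", "rectal_temp": 39.0, "pulse": 84.0,
    "respiratory_rate": 24.0, "temp_of_extremities": "cool", "peripheral_pulse": "normal",
    "mucous_membrane": "bright_pink", "capillary_refill_time": "less_3_sec",
    "pain": "depressed", "peristalsis": "hypermotile", "abdominal_distention": "none",
    "nasogastric_tube": "slight", "nasogastric_reflux": "none", "nasogastric_reflux_ph": 7.0,
    "rectal_exam_feces": "absent", "abdomen": "distend_small", "packed_cell_volume": 37.0,
    "total_protein": 6.5, "abdomo_appearance": "serosanguious", "abdomo_protein": 4.1,
    "surgical_lesion": "yes", "lesion_1": 7209, "lesion_2": 0, "lesion_3": 0, "cp_data": "yes",
}])
print(pipeline.predict(sample))
```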
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('preprocessor', ColumnTransformer(remainder='passthrough',<br /> transformers=[('num',<br /> Pipeline(steps=[('imputer',<br /> SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]),<br /> ['rectal_temp', 'pulse', 'respiratory_rate',<br /> 'nasogastric_reflux_ph', 'packed_cell_volume',<br /> 'total_protein', 'abdomo_protein', 'lesion_1',<br /> 'lesion_2', 'lesion_3']),<br /> ('cat',<br /> Pipeline(steps=[('imputer',<br /> SimpleI...='missing',<br /> strategy='constant')),<br /> ('onehot',<br /> OneHotEncoder(handle_unknown='ignore'))]),<br /> ['surgery', 'age', 'temp_of_extremities',<br /> 'peripheral_pulse', 'mucous_membrane',<br /> 'capillary_refill_time', 'pain',<br /> 'peristalsis', 'abdominal_distention',<br /> 'nasogastric_tube', 'nasogastric_reflux',<br /> 'rectal_exam_feces', 'abdomen',<br /> 'abdomo_appearance', 'surgical_lesion',<br /> 'cp_data'])])), ('classifier', <catboost.core.CatBoostClassifier object at 0x000001C4CE4ABF10>)] |
| verbose | False |
| preprocessor | ColumnTransformer(remainder='passthrough',<br /> transformers=[('num',<br /> Pipeline(steps=[('imputer',<br /> SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]),<br /> ['rectal_temp', 'pulse', 'respiratory_rate',<br /> 'nasogastric_reflux_ph', 'packed_cell_volume',<br /> 'total_protein', 'abdomo_protein', 'lesion_1',<br /> 'lesion_2', 'lesion_3']),<br /> ('cat',<br /> Pipeline(steps=[('imputer',<br /> SimpleI...='missing',<br /> strategy='constant')),<br /> ('onehot',<br /> OneHotEncoder(handle_unknown='ignore'))]),<br /> ['surgery', 'age', 'temp_of_extremities',<br /> 'peripheral_pulse', 'mucous_membrane',<br /> 'capillary_refill_time', 'pain',<br /> 'peristalsis', 'abdominal_distention',<br /> 'nasogastric_tube', 'nasogastric_reflux',<br /> 'rectal_exam_feces', 'abdomen',<br /> 'abdomo_appearance', 'surgical_lesion',<br /> 'cp_data'])]) |
| classifier | <catboost.core.CatBoostClassifier object at 0x000001C4CE4ABF10> |
| preprocessor__n_jobs | |
| preprocessor__remainder | passthrough |
| preprocessor__sparse_threshold | 0.3 |
| preprocessor__transformer_weights | |
| preprocessor__transformers | [('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]), ['rectal_temp', 'pulse', 'respiratory_rate', 'nasogastric_reflux_ph', 'packed_cell_volume', 'total_protein', 'abdomo_protein', 'lesion_1', 'lesion_2', 'lesion_3']), ('cat', Pipeline(steps=[('imputer',<br /> SimpleImputer(fill_value='missing', strategy='constant')),<br /> ('onehot', OneHotEncoder(handle_unknown='ignore'))]), ['surgery', 'age', 'temp_of_extremities', 'peripheral_pulse', 'mucous_membrane', 'capillary_refill_time', 'pain', 'peristalsis', 'abdominal_distention', 'nasogastric_tube', 'nasogastric_reflux', 'rectal_exam_feces', 'abdomen', 'abdomo_appearance', 'surgical_lesion', 'cp_data'])] |
| preprocessor__verbose | False |
| preprocessor__verbose_feature_names_out | True |
| preprocessor__num | Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),<br /> ('scaler', StandardScaler())]) |
| preprocessor__cat | Pipeline(steps=[('imputer',<br /> SimpleImputer(fill_value='missing', strategy='constant')),<br /> ('onehot', OneHotEncoder(handle_unknown='ignore'))]) |
| preprocessor__num__memory | |
| preprocessor__num__steps | [('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())] |
| preprocessor__num__verbose | False |
| preprocessor__num__imputer | SimpleImputer(strategy='median') |
| preprocessor__num__scaler | StandardScaler() |
| preprocessor__num__imputer__add_indicator | False |
| preprocessor__num__imputer__copy | True |
| preprocessor__num__imputer__fill_value | |
| preprocessor__num__imputer__keep_empty_features | False |
| preprocessor__num__imputer__missing_values | nan |
| preprocessor__num__imputer__strategy | median |
| preprocessor__num__scaler__copy | True |
| preprocessor__num__scaler__with_mean | True |
| preprocessor__num__scaler__with_std | True |
| preprocessor__cat__memory | |
| preprocessor__cat__steps | [('imputer', SimpleImputer(fill_value='missing', strategy='constant')), ('onehot', OneHotEncoder(handle_unknown='ignore'))] |
| preprocessor__cat__verbose | False |
| preprocessor__cat__imputer | SimpleImputer(fill_value='missing', strategy='constant') |
| preprocessor__cat__onehot | OneHotEncoder(handle_unknown='ignore') |
| preprocessor__cat__imputer__add_indicator | False |
| preprocessor__cat__imputer__copy | True |
| preprocessor__cat__imputer__fill_value | missing |
| preprocessor__cat__imputer__keep_empty_features | False |
| preprocessor__cat__imputer__missing_values | nan |
| preprocessor__cat__imputer__strategy | constant |
| preprocessor__cat__onehot__categories | auto |
| preprocessor__cat__onehot__drop | |
| preprocessor__cat__onehot__dtype | <class 'numpy.float64'> |
| preprocessor__cat__onehot__feature_name_combiner | concat |
| preprocessor__cat__onehot__handle_unknown | ignore |
| preprocessor__cat__onehot__max_categories | |
| preprocessor__cat__onehot__min_frequency | |
| preprocessor__cat__onehot__sparse | deprecated |
| preprocessor__cat__onehot__sparse_output | True |
| classifier__learning_rate | 0.1 |
| classifier__silent | True |
| classifier__max_depth | 4 |
| classifier__n_estimators | 200 |
</details>
### Model Plot
<style>#sk-container-id-1 {color: black;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 div.sk-container {/* 
jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('preprocessor',ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler',StandardScaler())]),['rectal_temp', 'pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pi...OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age','temp_of_extremities','peripheral_pulse','mucous_membrane','capillary_refill_time','pain', 'peristalsis','abdominal_distention','nasogastric_tube','nasogastric_reflux','rectal_exam_feces','abdomen','abdomo_appearance','surgical_lesion','cp_data'])])),('classifier',<catboost.core.CatBoostClassifier object at 0x000001C4CE4ABF10>)])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('preprocessor',ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler',StandardScaler())]),['rectal_temp', 'pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pi...OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age','temp_of_extremities','peripheral_pulse','mucous_membrane','capillary_refill_time','pain', 'peristalsis','abdominal_distention','nasogastric_tube','nasogastric_reflux','rectal_exam_feces','abdomen','abdomo_appearance','surgical_lesion','cp_data'])])),('classifier',<catboost.core.CatBoostClassifier object at 0x000001C4CE4ABF10>)])</pre></div></div></div><div class="sk-serial"><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">preprocessor: ColumnTransformer</label><div class="sk-toggleable__content"><pre>ColumnTransformer(remainder='passthrough',transformers=[('num',Pipeline(steps=[('imputer',SimpleImputer(strategy='median')),('scaler', StandardScaler())]),['rectal_temp', 'pulse', 'respiratory_rate','nasogastric_reflux_ph', 'packed_cell_volume','total_protein', 'abdomo_protein', 'lesion_1','lesion_2', 'lesion_3']),('cat',Pipeline(steps=[('imputer',SimpleI...='missing',strategy='constant')),('onehot',OneHotEncoder(handle_unknown='ignore'))]),['surgery', 'age', 
'temp_of_extremities','peripheral_pulse', 'mucous_membrane','capillary_refill_time', 'pain','peristalsis', 'abdominal_distention','nasogastric_tube', 'nasogastric_reflux','rectal_exam_feces', 'abdomen','abdomo_appearance', 'surgical_lesion','cp_data'])])</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">num</label><div class="sk-toggleable__content"><pre>['rectal_temp', 'pulse', 'respiratory_rate', 'nasogastric_reflux_ph', 'packed_cell_volume', 'total_protein', 'abdomo_protein', 'lesion_1', 'lesion_2', 'lesion_3']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-4" type="checkbox" ><label for="sk-estimator-id-4" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(strategy='median')</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-5" type="checkbox" ><label for="sk-estimator-id-5" class="sk-toggleable__label sk-toggleable__label-arrow">StandardScaler</label><div class="sk-toggleable__content"><pre>StandardScaler()</pre></div></div></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-6" type="checkbox" ><label for="sk-estimator-id-6" class="sk-toggleable__label sk-toggleable__label-arrow">cat</label><div class="sk-toggleable__content"><pre>['surgery', 'age', 'temp_of_extremities', 'peripheral_pulse', 'mucous_membrane', 'capillary_refill_time', 'pain', 'peristalsis', 'abdominal_distention', 'nasogastric_tube', 'nasogastric_reflux', 'rectal_exam_feces', 'abdomen', 'abdomo_appearance', 'surgical_lesion', 'cp_data']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-7" type="checkbox" ><label for="sk-estimator-id-7" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(fill_value='missing', strategy='constant')</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-8" type="checkbox" ><label for="sk-estimator-id-8" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder(handle_unknown='ignore')</pre></div></div></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-9" type="checkbox" ><label for="sk-estimator-id-9" class="sk-toggleable__label sk-toggleable__label-arrow">remainder</label><div 
class="sk-toggleable__content"><pre>[]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-10" type="checkbox" ><label for="sk-estimator-id-10" class="sk-toggleable__label sk-toggleable__label-arrow">passthrough</label><div class="sk-toggleable__content"><pre>passthrough</pre></div></div></div></div></div></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-11" type="checkbox" ><label for="sk-estimator-id-11" class="sk-toggleable__label sk-toggleable__label-arrow">CatBoostClassifier</label><div class="sk-toggleable__content"><pre><catboost.core.CatBoostClassifier object at 0x000001C4CE4ABF10></pre></div></div></div></div></div></div></div>
## Evaluation Results
| Metric | Value |
|----------|----------|
| accuracy | 0.744939 |
| f1 score | 0.744939 |
### Confusion Matrix

# How to Get Started with the Model
[More Information Needed]
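Usage is not documented above, but a minimal hedged sketch of how the pickled pipeline could be downloaded and queried is shown below (this assumes `catboost_without_hospital_number.pkl` holds the full sklearn `Pipeline` described in the hyperparameters and that compatible `scikit-learn`/`catboost` versions are installed; the example row reuses the widget data):
```python
import pickle

import pandas as pd
from huggingface_hub import hf_hub_download

# Download the pickled sklearn Pipeline (preprocessor + CatBoostClassifier) from this repo
model_path = hf_hub_download(
    repo_id="kmposkid1/Horse-Health-Outcome-6bca6390-b05b-43b6-831d-8bf9f7caf24c",
    filename="catboost_without_hospital_number.pkl",
)
with open(model_path, "rb") as f:
    pipeline = pickle.load(f)

# One example horse, taken from the first row of the widget data above
sample = pd.DataFrame([{
    "surgery": "yes", "age": "adult", "rectal_temp": 39.0, "pulse": 84.0,
    "respiratory_rate": 24.0, "temp_of_extremities": "cool", "peripheral_pulse": "normal",
    "mucous_membrane": "bright_pink", "capillary_refill_time": "less_3_sec", "pain": "depressed",
    "peristalsis": "hypermotile", "abdominal_distention": "none", "nasogastric_tube": "slight",
    "nasogastric_reflux": "none", "nasogastric_reflux_ph": 7.0, "rectal_exam_feces": "absent",
    "abdomen": "distend_small", "packed_cell_volume": 37.0, "total_protein": 6.5,
    "abdomo_appearance": "serosanguious", "abdomo_protein": 4.1, "surgical_lesion": "yes",
    "lesion_1": 7209, "lesion_2": 0, "lesion_3": 0, "cp_data": "yes",
}])

# Predict the health outcome for the example horse
print(pipeline.predict(sample))
```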
# Model Card Authors
kmposkid
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
r0ll/Scammers | r0ll | 2023-09-20T13:00:42Z | 0 | 0 | null | [
"ru",
"license:openrail",
"region:us"
]
| null | 2023-09-20T12:59:37Z | ---
license: openrail
language:
- ru
--- |
margenai/StateBankPakistan | margenai | 2023-09-20T12:41:13Z | 0 | 0 | peft | [
"peft",
"base_model:EleutherAI/pythia-410m-v0",
"base_model:adapter:EleutherAI/pythia-410m-v0",
"region:us"
]
| null | 2023-09-13T04:05:48Z | ---
library_name: peft
base_model: EleutherAI/pythia-410m-v0
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
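A hedged sketch of how the listed values map onto `transformers`' `BitsAndBytesConfig`, and of how the PEFT adapter in this repo could be attached to the base model (the adapter type and loading choices are assumptions, not documented here):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Reconstruct the 4-bit quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit and attach the adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-v0",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-v0")
model = PeftModel.from_pretrained(base, "margenai/StateBankPakistan")
```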
### Framework versions
- PEFT 0.4.0
|
ckmfong/q-FrozenLake-v1-4x4-noSlippery | ckmfong | 2023-09-20T12:34:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T12:34:05Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebooks
model = load_from_hub(repo_id="ckmfong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CyberHarem/yokoyama_chika_idolmastercinderellagirls | CyberHarem | 2023-09-20T12:26:30Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yokoyama_chika_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T12:09:49Z | ---
license: mit
datasets:
- CyberHarem/yokoyama_chika_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yokoyama_chika_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/yokoyama_chika_idolmastercinderellagirls.pt` as the embedding and `4760/yokoyama_chika_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with a score of 0.980. The trigger words are:
1. `yokoyama_chika_idolmastercinderellagirls`
2. `twintails, long_hair, smile, open_mouth, blush, hair_ornament, brown_hair, bangs, brown_eyes, green_eyes`
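As a rough illustration only, here is a hedged `diffusers` sketch of loading the step-4760 files together (this assumes both files have been downloaded locally and are compatible with diffusers' LoRA and textual-inversion loaders; HCP-Diffusion artifacts are often used through other UIs instead, so treat this purely as a sketch):
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model used for the preview images in this card
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights (safetensors) and the embedding (pt) for step 4760 together
pipe.load_lora_weights("./4760", weight_name="yokoyama_chika_idolmastercinderellagirls.safetensors")
pipe.load_textual_inversion(
    "./4760/yokoyama_chika_idolmastercinderellagirls.pt",
    token="yokoyama_chika_idolmastercinderellagirls",
)

# Prompt with the trigger words listed above
image = pipe(
    "yokoyama_chika_idolmastercinderellagirls, twintails, long_hair, smile",
    num_inference_steps=30,
).images[0]
image.save("preview.png")
```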
This model is not recommended for use by the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.971 | [Download](5100/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.980** | [**Download**](4760/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.969 | [Download](4420/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.971 | [Download](4080/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.966 | [Download](3740/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.964 | [Download](3400/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.947 | [Download](3060/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.958 | [Download](2720/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.959 | [Download](2380/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.942 | [Download](2040/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.967 | [Download](1700/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.944 | [Download](1360/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.920 | [Download](1020/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.912 | [Download](680/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.692 | [Download](340/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ldos/text_shortening_model_v44 | ldos | 2023-09-20T12:17:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-20T07:29:11Z | ---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v44
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8836
- Rouge1: 0.4921
- Rouge2: 0.2719
- Rougel: 0.4429
- Rougelsum: 0.4423
- Bert precision: 0.8746
- Bert recall: 0.8761
- Average word count: 8.7063
- Max word count: 17
- Min word count: 5
- Average token count: 16.2989
- % shortened texts with length > 12: 8.7302
## Model description
More information needed
## Intended uses & limitations
More information needed
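Usage is not documented, but a hedged sketch of invoking the model for text shortening is given below (this assumes the repo ships a compatible tokenizer and that the standard summarization pipeline applies to this BART checkpoint; the length limits roughly follow the token counts reported above):
```python
from transformers import pipeline

shortener = pipeline("summarization", model="ldos/text_shortening_model_v44")

text = "Example of a long sentence or product description that should be shortened by the model."
print(shortener(text, max_length=20, min_length=5)[0]["summary_text"])
```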
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.0083 | 1.0 | 83 | 1.4717 | 0.4904 | 0.2378 | 0.426 | 0.4266 | 0.8725 | 0.8732 | 8.5794 | 18 | 4 | 15.6164 | 6.3492 |
| 0.5702 | 2.0 | 166 | 1.4852 | 0.4722 | 0.2421 | 0.414 | 0.4143 | 0.869 | 0.8653 | 7.9101 | 14 | 4 | 13.6455 | 1.5873 |
| 0.4588 | 3.0 | 249 | 1.6283 | 0.5038 | 0.2733 | 0.4424 | 0.4422 | 0.8732 | 0.8794 | 9.0053 | 16 | 4 | 16.8386 | 8.9947 |
| 0.3586 | 4.0 | 332 | 1.6017 | 0.4965 | 0.2762 | 0.4381 | 0.4383 | 0.8709 | 0.8787 | 9.2381 | 18 | 4 | 16.3042 | 12.1693 |
| 0.2479 | 5.0 | 415 | 1.7497 | 0.4794 | 0.2613 | 0.4295 | 0.43 | 0.872 | 0.8702 | 8.3228 | 15 | 4 | 15.209 | 3.1746 |
| 0.2296 | 6.0 | 498 | 1.8482 | 0.4935 | 0.2739 | 0.4442 | 0.4443 | 0.8737 | 0.8755 | 8.7963 | 17 | 5 | 16.2989 | 7.1429 |
| 0.3065 | 7.0 | 581 | 1.9485 | 0.4765 | 0.2552 | 0.4213 | 0.4212 | 0.8698 | 0.8693 | 8.4683 | 17 | 5 | 15.6005 | 7.9365 |
| 0.2598 | 8.0 | 664 | 2.1608 | 0.4871 | 0.2585 | 0.4316 | 0.4319 | 0.8707 | 0.8736 | 8.963 | 16 | 5 | 16.6481 | 9.5238 |
| 0.2707 | 9.0 | 747 | 2.0966 | 0.4758 | 0.2603 | 0.4231 | 0.4246 | 0.8709 | 0.8717 | 8.4841 | 16 | 4 | 15.9312 | 7.1429 |
| 0.2099 | 10.0 | 830 | 2.2721 | 0.4777 | 0.2604 | 0.4246 | 0.4246 | 0.8735 | 0.8724 | 8.4312 | 15 | 4 | 15.9471 | 5.5556 |
| 0.1668 | 11.0 | 913 | 2.3536 | 0.4758 | 0.2541 | 0.4331 | 0.4328 | 0.8721 | 0.87 | 8.2857 | 14 | 4 | 15.7725 | 3.1746 |
| 0.1552 | 12.0 | 996 | 2.4572 | 0.484 | 0.2562 | 0.4313 | 0.4304 | 0.8726 | 0.875 | 8.828 | 17 | 4 | 16.246 | 7.9365 |
| 0.2141 | 13.0 | 1079 | 2.4485 | 0.4785 | 0.2631 | 0.4257 | 0.4252 | 0.8678 | 0.8736 | 9.1402 | 19 | 4 | 16.6561 | 11.3757 |
| 0.1348 | 14.0 | 1162 | 2.5012 | 0.4821 | 0.2613 | 0.4292 | 0.4296 | 0.8706 | 0.8738 | 8.8783 | 17 | 4 | 16.5185 | 10.0529 |
| 0.074 | 15.0 | 1245 | 2.5309 | 0.4915 | 0.2745 | 0.445 | 0.444 | 0.8764 | 0.8768 | 8.6667 | 16 | 4 | 16.2513 | 9.2593 |
| 0.1822 | 16.0 | 1328 | 2.5735 | 0.4709 | 0.2566 | 0.4239 | 0.4232 | 0.872 | 0.8692 | 8.2063 | 15 | 3 | 15.7249 | 4.2328 |
| 0.086 | 17.0 | 1411 | 2.8597 | 0.4831 | 0.2675 | 0.4373 | 0.4372 | 0.8722 | 0.8743 | 8.754 | 16 | 5 | 16.5476 | 8.7302 |
| 0.0872 | 18.0 | 1494 | 2.7420 | 0.4831 | 0.2677 | 0.4367 | 0.4353 | 0.8724 | 0.873 | 8.664 | 17 | 5 | 16.3016 | 7.672 |
| 0.1164 | 19.0 | 1577 | 2.8790 | 0.4867 | 0.269 | 0.4388 | 0.4381 | 0.8737 | 0.8755 | 8.7725 | 17 | 5 | 16.4418 | 8.9947 |
| 0.1101 | 20.0 | 1660 | 2.8836 | 0.4921 | 0.2719 | 0.4429 | 0.4423 | 0.8746 | 0.8761 | 8.7063 | 17 | 5 | 16.2989 | 8.7302 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AbdelKarim95/ppo-SnowballTarget | AbdelKarim95 | 2023-09-20T12:12:23Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-20T12:12:19Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AbdelKarim95/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
msy127/opt-350m-aihubqa-130-dpo-adapter | msy127 | 2023-09-20T11:58:17Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T10:26:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
alexandrainst/da-ner-base | alexandrainst | 2023-09-20T11:56:44Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"da",
"dataset:dane",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
language:
- da
license: apache-2.0
datasets:
- dane
widget:
- text: Jens Peter Hansen kommer fra Danmark
---
# BERT fine-tuned for Named Entity Recognition in Danish
The model tags tokens (in Danish sentences) with named entity tags (BIO format) [PER, ORG, LOC, MISC].
The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForTokenClassification
model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base")
```
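A hedged inference example using the `pipeline` API (the aggregation strategy is an optional choice, not part of the original card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alexandrainst/da-ner-base",
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)
print(ner("Jens Peter Hansen kommer fra Danmark"))
```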
## Training Data
The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane). |
alexandrainst/da-binary-emotion-classification-base | alexandrainst | 2023-09-20T11:56:16Z | 124 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language:
- da
license: apache-2.0
widget:
- text: Der er et træ i haven.
---
# Danish BERT for emotion detection
The BERT Emotion model detects whether a Danish text is emotional or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
```
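A hedged inference example using the `pipeline` API (the example sentence is the one from the widget above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alexandrainst/da-binary-emotion-classification-base",
)
print(classifier("Der er et træ i haven."))
```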
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. |
alexandrainst/da-hatespeech-detection-base | alexandrainst | 2023-09-20T11:56:09Z | 122 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language:
- da
license: apache-2.0
widget:
- text: Senile gamle idiot
---
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. |
alexandrainst/da-hatespeech-classification-base | alexandrainst | 2023-09-20T11:56:03Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language:
- da
license: apache-2.0
widget:
- text: Senile gamle idiot
---
# Danish BERT for hate speech classification
The BERT HateSpeech model classifies offensive Danish text into 4 categories:
* `Særlig opmærksomhed` (special attention, e.g. threat)
* `Personangreb` (personal attack)
* `Sprogbrug` (offensive language)
* `Spam & indhold` (spam)
This model is intended to be used after the [BERT HateSpeech detection model](https://huggingface.co/alexandrainst/da-hatespeech-detection-base).
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. |
alexandrainst/da-discourse-coherence-base | alexandrainst | 2023-09-20T11:50:37Z | 114 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"da",
"dataset:ajders/ddisco",
"base_model:NbAiLab/nb-bert-base",
"base_model:finetune:NbAiLab/nb-bert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-08T18:42:51Z | ---
language:
- da
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ajders/ddisco
metrics:
- accuracy
base_model: NbAiLab/nb-bert-base
model-index:
- name: ddisco_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# da-discourse-coherence-base
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the [DDisco](https://huggingface.co/datasets/ajders/ddisco) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7487
- Accuracy: 0.6915
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 703
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3422 | 0.4 | 5 | 1.0166 | 0.5721 |
| 0.9645 | 0.8 | 10 | 0.8966 | 0.5721 |
| 0.9854 | 1.24 | 15 | 0.8499 | 0.5721 |
| 0.8628 | 1.64 | 20 | 0.8379 | 0.6517 |
| 0.9046 | 2.08 | 25 | 0.8228 | 0.5721 |
| 0.8361 | 2.48 | 30 | 0.7980 | 0.5821 |
| 0.8158 | 2.88 | 35 | 0.8095 | 0.5821 |
| 0.8689 | 3.32 | 40 | 0.7989 | 0.6169 |
| 0.8125 | 3.72 | 45 | 0.7730 | 0.6965 |
| 0.843 | 4.16 | 50 | 0.7566 | 0.6418 |
| 0.7421 | 4.56 | 55 | 0.7840 | 0.6517 |
| 0.7949 | 4.96 | 60 | 0.7531 | 0.6915 |
| 0.828 | 5.4 | 65 | 0.7464 | 0.6816 |
| 0.7438 | 5.8 | 70 | 0.7487 | 0.6915 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.9.0
- Tokenizers 0.13.2
### Contributor
[ajders](https://github.com/AJDERS) |
alexandrainst/da-offensive-detection-base | alexandrainst | 2023-09-20T11:50:26Z | 105 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-12T10:04:35Z | ---
license: apache-2.0
widget:
- text: Din store idiot
base_model: xlm-roberta-base
---
# Danish Offensive Text Detection based on XLM-Roberta-Base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a dataset consisting of approximately 5 million Facebook comments on [DR](https://dr.dk/)'s public Facebook pages. The labels have been automatically generated using weak supervision, based on the [Snorkel](https://www.snorkel.org/) framework.
The model achieves SOTA on a test set consisting of 600 Facebook comments annotated using majority vote by three annotators, of which 35.8% were labelled as offensive:
| **Model** | **Precision** | **Recall** | **F1-score** | **F2-score** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| `alexandrainst/da-offensive-detection-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** |
| [`alexandrainst/da-offensive-detection-small`](https://huggingface.co/alexandrainst/da-offensive-detection-small) | 74.13% | 89.30% | 81.01% | 85.79% |
| [`A&ttack`](https://github.com/ogtal/A-ttack) | **97.32%** | 50.70% | 66.67% | 56.07% |
| [`alexandrainst/da-hatespeech-detection-small`](https://huggingface.co/alexandrainst/da-hatespeech-detection-small) | 86.43% | 56.28% | 68.17% | 60.50% |
| [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 75.41% | 42.79% | 54.60% | 46.84% |
## Using the model
You can use the model simply by running the following:
```python
>>> from transformers import pipeline
>>> offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base")
>>> offensive_text_pipeline("Din store idiot")
[{'label': 'Offensive', 'score': 0.9997463822364807}]
```
Processing multiple documents at the same time can be done as follows:
```python
>>> offensive_text_pipeline(["Din store idiot", "ej hvor godt :)"])
[{'label': 'Offensive', 'score': 0.9997463822364807}, {'label': 'Not offensive', 'score': 0.9996451139450073}]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- seed: 4242
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- max_steps: 500000
- fp16: True
- eval_steps: 1000
- early_stopping_patience: 100
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1 |
honzatoegel/falcon-rw-1b-gramma-corrections-de-en | honzatoegel | 2023-09-20T11:42:24Z | 5 | 1 | peft | [
"peft",
"dataset:honzatoegel/lola-gramma-de-en",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"region:us"
]
| null | 2023-08-30T04:26:51Z | ---
library_name: peft
datasets:
- honzatoegel/lola-gramma-de-en
base_model: tiiuae/falcon-rw-1b
---
## Examples
As you can see from the examples below, the output is far from ideal, and far from what a simple GPT/LLama2 prompt achieves without finetuning.
Quality issues:
- the corrected sentences contain major errors
- the explanations of corrections are wrong
The low quality is probably caused by the very low volume of training data and the small size of the LLM.
Interesting observation: the LLM responds exactly as learned (100%) to example 1, which comes directly from the training data.
### Example 1
#### Input
''### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:
#### Output
"Mir geht es gut, danke."
Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End of repair list for this sentence.#### End of topic, repair list and the sentence
### Example 2
#### Input
''### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
#### Output
Er macht dort dorte, aber ich muss einmal dorte.
Repairs:
- Instead of "Er gehen dort" it should be "Er macht dort" - the correct verb form is "macht"
- Instead of "ich muss" it should be "ich muss einmal" - the verb "einmal" is used in the future in German.
- The verb "dorte" must be conjugated in the third person singular "dorte".#### End of Repairs:
- Instead of "Er gehen dort" it should be "Er macht dort" - the correct verb form is "macht".
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0 |
honzatoegel/Llama-2-70b-chat-hf-gramma-corrections-de-en-2-overfitt | honzatoegel | 2023-09-20T11:42:12Z | 2 | 0 | peft | [
"peft",
"de",
"en",
"dataset:honzatoegel/lola-gramma-de-en",
"base_model:meta-llama/Llama-2-70b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-70b-chat-hf",
"region:us"
]
| null | 2023-08-30T13:40:43Z | ---
language:
- de
- en
library_name: peft
datasets:
- honzatoegel/lola-gramma-de-en
base_model: meta-llama/Llama-2-70b-chat-hf
---
## Examples
As you can see from the examples below, the output is far from ideal, and far from what a simple GPT/LLama2 prompt achieves without finetuning.
Quality issues:
- the corrected sentences contain some minor errors
- the explanations of corrections are far from optimal; they often refer to additional words in the explanations that are not necessary
- the output repeats itself
- the output list is sometimes broken into two lists
The low quality is probably caused by the very low volume of training data (100 rows) and the small number of combinations, which can hardly be generalized.
### Example 1
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
#### Output
Er geht dort, aber ich muss anders gemacht.
Repairs:
- Instead of "gehen" it should be "goes" - the verb "gehen" is in the third person singular.
- Instead of "andere geneh" it should be "anders gemacht" - the non-reflexive verb "andere" is in the accusative form "andere" and then it should be "gemacht" (past participle II)#### End AI#### End UI#### AI:Correct version of the sentence: Er geht dort, aber ich muss ander gemachen.
Repairs:
- Instead of "gehen" it should be "goes" - the
### Example 2
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:
#### Output
"Mir geht es gut, danke."
Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End AI:》
Mir es geht gut, danke
Correct version of the sentence: "Mir geht es gut, danke."
Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End
Mir es geht gut, danke
Correct version of the sentence
### Example 3
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Habe ich das wirklich gemachen?#### AI:Correct version of the sentence:
#### Output
Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - the verb "gemachen" must be conjugated in the past tense, third person singular.#### End AI#### End User:Correct version of the sentence: Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - the verb "gemachen" must be conjugated in the past tense, third person singular.#### End AI####End User:Correct version of the sentence: Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" -
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0 |
honzatoegel/Llama-2-70b-chat-hf-gramma-corrections-de-en | honzatoegel | 2023-09-20T11:41:45Z | 1 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-70b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-70b-chat-hf",
"region:us"
]
| null | 2023-08-27T13:40:56Z | ---
library_name: peft
base_model: meta-llama/Llama-2-70b-chat-hf
---
## Examples
As you can see from the examples below, the output is far from ideal, and far from what a simple GPT/LLama2 prompt achieves without finetuning.
Quality issues:
- the corrected sentences contain some minor errors
- the explanations of corrections are far from optimal; they often refer to additional words in the explanations that are not necessary
- the output repeats itself
- the output list is sometimes broken into two lists
The low quality is probably caused by the very low volume of training data (100 rows) and the small number of combinations, which can hardly be generalized.
### Example 1
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Ich mussen sehr rückwärts fahren.#### AI:Correct version of the sentence:
#### Output
Ich muss sehr zurückfahren.
Repairs:
- Instead of "rückwärts" it should be "zurück" - this is the correct form of the verb "to drive back".#### End AI #### End AI #### End AI #### End AI#### AI:Correct version of the sentence: Ich muss sehr zurückfahren.
Repairs:
- Instead of "mussen" it should be "muss" - this is the correct form of the verb "to must" in the third person singular.#### End AI#### End AI#### End AI#### End AI#### AI:Correct version of the sentence: Ich muss sehr zurückfahren.
Repairs:
### Example 2
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
#### Output
Er geht dort, aber ich muss anders gehen.
Repairs:
- Instead of "gehen" it should be "geht" - it is the third person singular of the verb "gehen".
- Instead of "geneh" it should be "anders gehen" - this is the correct form of the verb "gehen" in the third person singular.#### End AI %}
Repairs:
- Instead of "gehen" it should be "geht" - it is the third person singular of the verb "gehen".
- Instead of "geneh" it should be "anders gehen" - this is the correct form of the verb "gehen" in
### Example 3
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Habe ich das wirklich gemachen?#### AI:Correct version of the sentence:
#### Output
Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - this is the third person singular of the verb "machen" in the past tense.#### End AI %}
Repairs:
- Instead of "Habe ich das wirklich gemachen?" it should be "Habe ich das wirklich gemacht?" - This is the correct sentence with all the required information.#### End AI %}
Repairs:
- The verb "gemachen" is in the wrong form, it should be "gemacht" - this is the third person singular of the verb "machen" in the past tense.#### End AI %}
Repairs:
## Training procedure
Trained on: 1x RTX A6000
Used: 30 GB RAM, 47 GB GPU memory, 130 GB disk
8 epochs, approx. 25 minutes, loss: 0.36
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
hosnasn/Reza_DB | hosnasn | 2023-09-20T11:33:50Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-20T11:33:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks Reza
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
sianbru/product_classifier_split_url_nodigit_all | sianbru | 2023-09-20T11:31:59Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-dbmdz-uncased",
"base_model:finetune:google-bert/bert-base-german-dbmdz-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-19T21:30:51Z | ---
license: mit
base_model: bert-base-german-dbmdz-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: product_classifier_split_url_nodigit_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# product_classifier_split_url_nodigit_all
This model is a fine-tuned version of [bert-base-german-dbmdz-uncased](https://huggingface.co/bert-base-german-dbmdz-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1833
- Accuracy: 0.9734
- F1: 0.9732
- Precision: 0.9731
- Recall: 0.9734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0385 | 1.0 | 1300 | 0.1880 | 0.9666 | 0.9663 | 0.9665 | 0.9666 |
| 0.0198 | 2.0 | 2600 | 0.1707 | 0.9718 | 0.9718 | 0.9719 | 0.9718 |
| 0.0083 | 3.0 | 3900 | 0.1833 | 0.9734 | 0.9732 | 0.9731 | 0.9734 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bavolesy/ppo-Huggy | bavolesy | 2023-09-20T11:29:34Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-20T11:29:23Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bavolesy/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mychen76/donut-receipt_v2 | mychen76 | 2023-09-20T11:19:38Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-09-20T09:51:58Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-receipt_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-receipt_v2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
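The card gives no usage snippet, so the following is only a sketch of standard Donut inference. The task prompt token is an assumption and must match whatever prompt was used during fine-tuning.
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("mychen76/donut-receipt_v2")
model = VisionEncoderDecoderModel.from_pretrained("mychen76/donut-receipt_v2")

image = Image.open("receipt.jpg").convert("RGB")  # hypothetical local image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumed task prompt; replace with the prompt actually used at training time.
task_prompt = "<s_receipt>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```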
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
OpenDILabCommunity/HalfCheetah-v3-TD3 | OpenDILabCommunity | 2023-09-20T11:17:26Z | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"HalfCheetah-v3",
"en",
"license:apache-2.0",
"region:us"
]
| reinforcement-learning | 2023-04-15T21:02:47Z | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- HalfCheetah-v3
benchmark_name: OpenAI/Gym/MuJoCo
task_name: HalfCheetah-v3
pipeline_tag: reinforcement-learning
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/MuJoCo-HalfCheetah-v3
type: OpenAI/Gym/MuJoCo-HalfCheetah-v3
metrics:
- type: mean_reward
value: 11100.89 +/- 77.51
name: mean_reward
---
# Play **HalfCheetah-v3** with **TD3** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **TD3** implementation for the OpenAI/Gym/MuJoCo **HalfCheetah-v3** environment, built with the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, based on reinforcement learning implementations in PyTorch or JAX. It aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. Self-customized training pipelines and applications are also supported by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import TD3Agent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = TD3Agent(env_id="HalfCheetah-v3", exp_name="HalfCheetah-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import TD3Agent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/HalfCheetah-v3-TD3")
# Instantiate the agent
agent = TD3Agent(env_id="HalfCheetah-v3", exp_name="HalfCheetah-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import TD3Agent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = TD3Agent(env_id="HalfCheetah-v3", exp_name="HalfCheetah-v3-TD3")
# Train the agent
return_ = agent.train(step=int(5000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/MuJoCo",
task_name="HalfCheetah-v3",
algo_name="TD3",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html",
installation_guide='''
sudo apt update -y \
&& sudo apt install -y \
build-essential \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
libglfw3 \
libglfw3-dev \
libsdl2-dev \
libsdl2-image-dev \
libglm-dev \
libfreetype6-dev \
patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
''',
usage_file_by_git_clone="./td3/halfcheetah_td3_deploy.py",
usage_file_by_huggingface_ding="./td3/halfcheetah_td3_download.py",
train_file="./td3/halfcheetah_td3.py",
repo_id="OpenDILabCommunity/HalfCheetah-v3-TD3",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 11000,
'n_evaluator_episode': 8,
'env_id': 'HalfCheetah-v3',
'collector_env_num': 1,
'evaluator_env_num': 8,
'env_wrapper': 'mujoco_default'
},
'policy': {
'model': {
'twin_critic': True,
'obs_shape': 17,
'action_shape': 6,
'actor_head_hidden_size': 256,
'critic_head_hidden_size': 256,
'action_space': 'regression'
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 1,
'batch_size': 256,
'learning_rate_actor': 0.001,
'learning_rate_critic': 0.001,
'ignore_done': True,
'target_theta': 0.005,
'discount_factor': 0.99,
'actor_update_freq': 2,
'noise': True,
'noise_sigma': 0.2,
'noise_range': {
'min': -0.5,
'max': 0.5
}
},
'collect': {
'collector': {},
'unroll_len': 1,
'noise_sigma': 0.1,
'n_sample': 1
},
'eval': {
'evaluator': {
'eval_freq': 5000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'figure_path': None,
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 11000,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 1000000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'td3',
'priority': False,
'priority_IS_weight': False,
'random_collect_size': 25000,
'transition_with_policy_data': False,
'action_space': 'continuous',
'reward_batch_norm': False,
'multi_agent': False,
'cfg_type': 'TD3PolicyDict'
},
'exp_name': 'HalfCheetah-v3-TD3',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/HalfCheetah-v3-TD3)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/HalfCheetah-v3-TD3/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/HalfCheetah-v3-TD3/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 1690.06 KB
- **Last Update Date:** 2023-09-20
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/MuJoCo
- **Task:** HalfCheetah-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
|
bangtai/path-to-save-model | bangtai | 2023-09-20T11:16:04Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-19T03:17:31Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bangtai/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
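A minimal inference sketch with Diffusers, using the instance prompt the weights were trained on:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bangtai/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# The prompt extends the training instance prompt "a photo of sks dog".
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```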
|
tianzhidao/marian-finetuned-kde4-en-to-fr | tianzhidao | 2023-09-20T11:14:44Z | 59 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-20T08:05:48Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_keras_callback
model-index:
- name: tianzhidao/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tianzhidao/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6857
- Validation Loss: 0.8029
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
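Pending fuller documentation, a minimal translation sketch is shown below. Note that this repository stores TensorFlow weights, so the pipeline is pinned to the TF framework; the input sentence is hypothetical.
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="tianzhidao/marian-finetuned-kde4-en-to-fr",
    framework="tf",
)

# Hypothetical input sentence.
print(translator("Default to expanded threads")[0]["translation_text"])
```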
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0612 | 0.8783 | 0 |
| 0.7974 | 0.8213 | 1 |
| 0.6857 | 0.8029 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
amitonHFace/ppo-LunarLander-v2 | amitonHFace | 2023-09-20T11:13:33Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T09:59:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 215.91 +/- 20.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" convention for this repository.
checkpoint = load_from_hub(repo_id="amitonHFace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dim/ruGPT-13b_dolly_oasst1_chip2 | dim | 2023-09-20T11:03:21Z | 3 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T11:00:43Z | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/ruGPT-13b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=2,
bot_token_id=46787,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Гавайские острова - это рай для любителей природы, культуры и приключений. Это место, где можно исследовать тропические леса, плавать с дельфинами, кататься на волнах прибоя или просто наслаждаться солнцем и пляжем. В дополнение к этому, здесь есть множество достопримечательностей, которые стоит посетить, таких как вулканы Мауна-Лоа и Килауэа, водопады Ваймеа, пещеры Халеакала и многое другое. Если вы ищете что-то более расслабляющее, то посещение гавайских ресторанов и баров может быть отличным способом провести время.
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
amiiin/methods100 | amiiin | 2023-09-20T10:56:44Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
]
| null | 2023-09-20T10:40:45Z | ---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: methods100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# methods100
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tvganesh/test_trainer | tvganesh | 2023-09-20T10:52:48Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:tvganesh/test_trainer",
"base_model:finetune:tvganesh/test_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-14T07:05:08Z | ---
license: apache-2.0
base_model: tvganesh/test_trainer
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [tvganesh/test_trainer](https://huggingface.co/tvganesh/test_trainer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge1: 0.8325
- Rouge2: 0.8187
- Rougel: 0.8294
- Rougelsum: 0.8312
- Gen Len: 18.6
## Model description
More information needed
## Intended uses & limitations
More information needed
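The card does not say what the model was fine-tuned for; given the T5 lineage and the ROUGE metrics below, a generic text2text call is a reasonable guess. A minimal, hedged sketch:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="tvganesh/test_trainer")

# Placeholder input; the expected input format is not documented in this card.
print(generator("your input text here", max_length=64)[0]["generated_text"])
```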
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 5 | 0.2345 | 0.7001 | 0.6536 | 0.6998 | 0.6957 | 16.3 |
| No log | 2.0 | 10 | 0.1472 | 0.7958 | 0.7695 | 0.7929 | 0.7965 | 18.3 |
| No log | 3.0 | 15 | 0.1174 | 0.7196 | 0.6705 | 0.7187 | 0.7118 | 16.3 |
| No log | 4.0 | 20 | 0.0554 | 0.7977 | 0.774 | 0.7907 | 0.7958 | 18.6 |
| No log | 5.0 | 25 | 0.0725 | 0.8205 | 0.8074 | 0.8188 | 0.8212 | 18.6 |
| No log | 6.0 | 30 | 0.0281 | 0.8114 | 0.7929 | 0.8098 | 0.8123 | 18.6 |
| No log | 7.0 | 35 | 0.0451 | 0.7959 | 0.7678 | 0.7908 | 0.7945 | 18.6 |
| No log | 8.0 | 40 | 0.0438 | 0.8285 | 0.8061 | 0.8205 | 0.8227 | 18.5 |
| No log | 9.0 | 45 | 0.0178 | 0.8249 | 0.8109 | 0.8225 | 0.8243 | 18.6 |
| No log | 10.0 | 50 | 0.0072 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 11.0 | 55 | 0.0119 | 0.8336 | 0.8217 | 0.8315 | 0.833 | 18.6 |
| No log | 12.0 | 60 | 0.0104 | 0.8336 | 0.8217 | 0.8315 | 0.833 | 18.6 |
| No log | 13.0 | 65 | 0.0031 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 14.0 | 70 | 0.0099 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 15.0 | 75 | 0.0067 | 0.8284 | 0.8053 | 0.8213 | 0.8226 | 18.6 |
| No log | 16.0 | 80 | 0.0019 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 17.0 | 85 | 0.0173 | 0.8143 | 0.798 | 0.8111 | 0.8102 | 18.2 |
| No log | 18.0 | 90 | 0.0007 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 19.0 | 95 | 0.0004 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 20.0 | 100 | 0.0195 | 0.8325 | 0.813 | 0.8294 | 0.8312 | 18.6 |
| No log | 21.0 | 105 | 0.0057 | 0.8325 | 0.813 | 0.8294 | 0.8312 | 18.6 |
| No log | 22.0 | 110 | 0.0005 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 23.0 | 115 | 0.0010 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 24.0 | 120 | 0.0003 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 25.0 | 125 | 0.0004 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 26.0 | 130 | 0.0005 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 27.0 | 135 | 0.0002 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 28.0 | 140 | 0.0001 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 29.0 | 145 | 0.0010 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 30.0 | 150 | 0.0003 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 31.0 | 155 | 0.0001 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 32.0 | 160 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 33.0 | 165 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 34.0 | 170 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 35.0 | 175 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 36.0 | 180 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 37.0 | 185 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 38.0 | 190 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 39.0 | 195 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 40.0 | 200 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DataBindu/swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged | DataBindu | 2023-09-20T10:43:19Z | 149 | 0 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft",
"base_model:finetune:microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-19T19:58:12Z | ---
license: apache-2.0
base_model: microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7268518518518519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8626
- Accuracy: 0.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
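Pending fuller documentation, a minimal inference sketch; the class labels come from the (unspecified) microbe image folders used for fine-tuning.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DataBindu/swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged",
)

# Hypothetical local image path.
print(classifier("microbe_sample.jpg"))
```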
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8355 | 0.98 | 15 | 2.5831 | 0.3333 |
| 1.9292 | 1.97 | 30 | 1.6850 | 0.5046 |
| 1.4121 | 2.95 | 45 | 1.2324 | 0.5972 |
| 1.0121 | 4.0 | 61 | 1.0345 | 0.6852 |
| 0.854 | 4.98 | 76 | 0.9663 | 0.6806 |
| 0.701 | 5.97 | 91 | 0.9587 | 0.6991 |
| 0.5956 | 6.95 | 106 | 0.8626 | 0.7269 |
| 0.5713 | 7.87 | 120 | 0.8645 | 0.7222 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/pekora_jashinchandropkick | CyberHarem | 2023-09-20T10:41:30Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/pekora_jashinchandropkick",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T10:23:18Z | ---
license: mit
datasets:
- CyberHarem/pekora_jashinchandropkick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of pekora_jashinchandropkick
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8960, you need to download `8960/pekora_jashinchandropkick.pt` as the embedding and `8960/pekora_jashinchandropkick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8960**, with a score of 0.891. The trigger words are:
1. `pekora_jashinchandropkick`
2. `short_hair, hairband, blonde_hair, black_hairband, capelet, shaded_face, shiny_hair, hair_between_eyes`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases place high demands on accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9600 | 0.883 | [Download](9600/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| **8960** | **0.891** | [**Download**](8960/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| 8320 | 0.861 | [Download](8320/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.877 | [Download](7680/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.875 | [Download](7040/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.865 | [Download](6400/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.841 | [Download](5760/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.773 | [Download](5120/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| 4480 | 0.767 | [Download](4480/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.798 | [Download](3840/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.714 | [Download](3200/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.701 | [Download](2560/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.717 | [Download](1920/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.614 | [Download](1280/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.485 | [Download](640/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
dim/llama2_7b_dolly_oasst1_chip2 | dim | 2023-09-20T10:39:00Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T10:21:13Z | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/llama2_7b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=1,
bot_token_id=9225,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
dim/xglm-4.5b_dolly_oasst1_chip2 | dim | 2023-09-20T10:38:18Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-20T10:37:14Z | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/xglm-4.5b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=0,
bot_token_id=7425,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Я был там! Это было незабываемое путешествие, которое я никогда не забуду. Мы посетили все основные достопримечательности острова, включая пляжи, вулканы, пещеры, национальные парки и многое другое. Впечатления от посещения были потрясающими, а культура - уникальной. Поездка была отличным способом исследовать остров и узнать больше об истории его жителей. Надеюсь, что вы также захотите посетить это место!
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
DuyTa/Vietnamese_ASR | DuyTa | 2023-09-20T10:36:20Z | 4 | 1 | peft | [
"peft",
"tensorboard",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"region:us"
]
| null | 2023-08-17T14:41:26Z | ---
library_name: peft
base_model: openai/whisper-medium
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
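The card records only the quantization config, so the following is an assumption-laden sketch: it treats this repository as a PEFT (LoRA) adapter on top of the base Whisper model named above.
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium", torch_dtype=torch.float16, device_map="auto"
)
# Assumes the repo contains adapter weights compatible with the base model.
model = PeftModel.from_pretrained(base, "DuyTa/Vietnamese_ASR")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```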
|
bavolesy/ppo-LunarLander-v2 | bavolesy | 2023-09-20T10:21:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-20T10:21:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.69 +/- 23.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" convention for this repository.
checkpoint = load_from_hub(repo_id="bavolesy/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Carve/isnet | Carve | 2023-09-20T10:17:12Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-01-29T22:51:13Z | ---
license: apache-2.0
---
`isnet.pth` - ISNet model. Trained on DIS v1.0 dataset.
`isnet-97-carveset.pth` - ISNet model. Trained on CarveSet. Achieves 98% on test set.
|
Kendong/bc_flowers | Kendong | 2023-09-20T10:12:39Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-19T13:18:57Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photograph of a BSC vase of ASK flowers on a wooden table
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Kendong/bc_flowers
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photograph of a BSC vase of ASK flowers on a wooden table using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
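A minimal sketch for loading these LoRA weights on top of the SDXL base model; using the fp16-fix VAE mirrors the training setup noted above.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kendong/bc_flowers")

prompt = "a photograph of a BSC vase of ASK flowers on a wooden table"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("bc_flowers.png")
```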
|
Asheyy/dogbooth | Asheyy | 2023-09-20T10:01:40Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-17T09:20:41Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Asheyy/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
swaroopajit/git-base-fashion | swaroopajit | 2023-09-20T09:58:46Z | 170 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2023-09-01T08:19:38Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-fashion
results: []
pipeline_tag: image-to-text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-fashion
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
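Pending fuller documentation, a minimal captioning sketch with the image-to-text pipeline (the image path is a placeholder):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="swaroopajit/git-base-fashion")

# Hypothetical local image of a fashion item.
print(captioner("fashion_item.jpg"))
```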
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 |
Technotech/sd-prompt-instruct-3b-epoch-0.4-lora | Technotech | 2023-09-20T09:54:02Z | 2 | 0 | peft | [
"peft",
"stable-diffusion",
"instruct",
"magic-prompt",
"en",
"dataset:Technotech/sd-prompt-instruct",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-29T15:14:28Z | ---
language:
- en
license: apache-2.0
library_name: peft
tags:
- stable-diffusion
- instruct
- magic-prompt
datasets:
- Technotech/sd-prompt-instruct
base_model: openlm-research/open_llama_3b_v2
---
# Stable Diffusion Prompt Instruct 3B (OpenLlama v2 3B)
Trained for 0.4 epochs (test) on [Technotech/sd-prompt-instruct](https://huggingface.co/datasets/Technotech/sd-prompt-instruct).
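A hedged loading sketch, since this repository stores a PEFT (LoRA) adapter for the base model below; the prompt template is an assumption and should be checked against the linked dataset.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2", device_map="auto")
model = PeftModel.from_pretrained(base, "Technotech/sd-prompt-instruct-3b-epoch-0.4-lora")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2", use_fast=False)

# Assumed instruction format; verify against the sd-prompt-instruct dataset.
prompt = "### Instruction:\nExpand into a detailed Stable Diffusion prompt: a castle at sunset\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```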
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0 |
SHENMU007/neunit_BASE_V9.5.14 | SHENMU007 | 2023-09-20T09:53:21Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-09-20T08:19:51Z | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
phamhungd/GuoZovya | phamhungd | 2023-09-20T09:52:42Z | 0 | 0 | null | [
"onnx",
"vi",
"en",
"license:unknown",
"region:us"
]
| null | 2023-02-28T05:44:19Z | ---
license: unknown
language:
- vi
- en
---
About this version
Model Merge :
A to Zovya RPG Artist's Tools
GuoFeng2
#Note: This is a 50/50 merge of my two favourite models, and the results are excellent: faces come out almost perfect with the default settings, and it handles the full range of other styles nearly as well. With this model I rarely need to reach for a different one to produce images across genres. I hope everyone will support it and explore its full potential.
|
mitch7w/falcon-7b-midjourney-example | mitch7w | 2023-09-20T09:41:16Z | 0 | 0 | peft | [
"peft",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"license:openrail",
"region:us"
]
| text-generation | 2023-09-20T07:26:15Z | ---
license: openrail
library_name: peft
pipeline_tag: text-generation
datasets:
- tiiuae/falcon-refinedweb
model_parameters:
base_model_name_or_path: vilsonrodrigues/falcon-7b-instruct-sharded
bias: none
fan_in_fan_out: false
inference_mode: true
init_lora_weights: true
lora_alpha: 32
lora_dropout: 0.05
peft_type: LORA
r: 16
target_modules:
- query_key_value
task_type: CAUSAL_LM
--- |
hidude562/OpenMusenet-LContext-2.11 | hidude562 | 2023-09-20T09:39:39Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-05T09:26:35Z | ---
pipeline_tag: text-generation
--- |
ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first-now-squad-ent-gen-syn | ShivamMangale | 2023-09-20T09:34:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-20T09:00:27Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-finetuned-squad-syn-first-now-squad-ent-gen-syn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-finetuned-squad-syn-first-now-squad-ent-gen-syn
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
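Pending fuller documentation, a minimal extractive question-answering sketch (the question and context are hypothetical):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first-now-squad-ent-gen-syn",
)

print(qa(
    question="Who wrote the report?",
    context="The report was written by the audit team in 2021.",
))
```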
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
haseong8012/wav2vec2-large-xlsr-53_ko | haseong8012 | 2023-09-20T09:33:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-20T03:43:03Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- zeroth_korean
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-fine-tune_korean_byAILAB
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: zeroth_korean
type: zeroth_korean
config: clean
split: test
args: clean
metrics:
- name: Wer
type: wer
value: 0.8577021532901672
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-fine-tune_korean_byAILAB
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4406
- Wer: 0.8577
## Model description
More information needed
## Intended uses & limitations
More information needed
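Pending fuller documentation, a minimal transcription sketch; the audio file is a placeholder and should contain 16 kHz mono speech.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="haseong8012/wav2vec2-large-xlsr-53_ko",
)

# Hypothetical local audio file (decoded via ffmpeg).
print(asr("korean_sample.wav"))
```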
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 38 | 54.3133 | 1.0 |
| No log | 2.0 | 77 | 33.5397 | 1.0 |
| No log | 2.99 | 115 | 19.6459 | 1.0 |
| No log | 4.0 | 154 | 11.1346 | 1.0 |
| No log | 4.99 | 192 | 5.8854 | 1.0 |
| No log | 6.0 | 231 | 4.8784 | 1.0 |
| No log | 6.99 | 269 | 4.8369 | 1.0 |
| No log | 8.0 | 308 | 4.8535 | 1.0 |
| No log | 8.99 | 346 | 4.8388 | 1.0 |
| No log | 10.0 | 385 | 4.8360 | 1.0 |
| 15.1801 | 10.99 | 423 | 4.7653 | 1.0 |
| 15.1801 | 12.0 | 462 | 4.7385 | 1.0 |
| 15.1801 | 12.99 | 500 | 4.6927 | 0.9989 |
| 15.1801 | 14.0 | 539 | 4.6673 | 0.9991 |
| 15.1801 | 14.99 | 577 | 4.6948 | 0.9991 |
| 15.1801 | 16.0 | 616 | 4.6713 | 0.9991 |
| 15.1801 | 16.99 | 654 | 4.6603 | 1.0 |
| 15.1801 | 18.0 | 693 | 4.6428 | 0.9995 |
| 15.1801 | 18.99 | 731 | 4.6520 | 0.9994 |
| 15.1801 | 20.0 | 770 | 4.6554 | 0.9967 |
| 4.3888 | 20.99 | 808 | 4.6054 | 0.9998 |
| 4.3888 | 22.0 | 847 | 4.5723 | 0.9976 |
| 4.3888 | 22.99 | 885 | 4.4586 | 0.9967 |
| 4.3888 | 24.0 | 924 | 4.2547 | 0.9934 |
| 4.3888 | 24.99 | 962 | 3.6554 | 0.9931 |
| 4.3888 | 26.0 | 1001 | 2.8387 | 1.0084 |
| 4.3888 | 26.99 | 1039 | 2.4191 | 1.0551 |
| 4.3888 | 28.0 | 1078 | 2.0997 | 1.0197 |
| 4.3888 | 28.99 | 1116 | 2.0103 | 1.0176 |
| 4.3888 | 30.0 | 1155 | 1.8189 | 0.9461 |
| 4.3888 | 30.99 | 1193 | 1.7623 | 0.9726 |
| 2.7217 | 32.0 | 1232 | 1.7383 | 0.9976 |
| 2.7217 | 32.99 | 1270 | 1.6522 | 0.9584 |
| 2.7217 | 34.0 | 1309 | 1.5558 | 0.9193 |
| 2.7217 | 34.99 | 1347 | 1.5811 | 0.9440 |
| 2.7217 | 36.0 | 1386 | 1.5208 | 0.9158 |
| 2.7217 | 36.99 | 1424 | 1.5088 | 0.9038 |
| 2.7217 | 38.0 | 1463 | 1.5039 | 0.9086 |
| 2.7217 | 38.99 | 1501 | 1.4853 | 0.8987 |
| 2.7217 | 40.0 | 1540 | 1.4799 | 0.8847 |
| 2.7217 | 40.99 | 1578 | 1.4259 | 0.8694 |
| 0.7635 | 42.0 | 1617 | 1.4878 | 0.8883 |
| 0.7635 | 42.99 | 1655 | 1.4394 | 0.8693 |
| 0.7635 | 44.0 | 1694 | 1.4623 | 0.8743 |
| 0.7635 | 44.99 | 1732 | 1.4495 | 0.8710 |
| 0.7635 | 46.0 | 1771 | 1.4463 | 0.8655 |
| 0.7635 | 46.99 | 1809 | 1.4553 | 0.8704 |
| 0.7635 | 48.0 | 1848 | 1.4500 | 0.8646 |
| 0.7635 | 48.99 | 1886 | 1.4387 | 0.8566 |
| 0.7635 | 49.35 | 1900 | 1.4406 | 0.8577 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/makihara_shiho_idolmastercinderellagirls | CyberHarem | 2023-09-20T09:27:49Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/makihara_shiho_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T09:15:47Z | ---
license: mit
datasets:
- CyberHarem/makihara_shiho_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of makihara_shiho_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 4420, you need to download `4420/makihara_shiho_idolmastercinderellagirls.pt` as the embedding and `4420/makihara_shiho_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
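No loading code is shown in the card. A rough `diffusers` sketch is given below; whether HCP-Diffusion exports load directly with `load_textual_inversion` and `load_lora_weights` is an assumption (many users apply these files through the Stable Diffusion WebUI instead), and the paths, token name and prompt are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical sketch: direct compatibility of the exported .pt/.safetensors files
# with diffusers is not confirmed by the card; conversion may be required.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file is used as a textual-inversion embedding, the .safetensors as a LoRA.
pipe.load_textual_inversion(
    "4420/makihara_shiho_idolmastercinderellagirls.pt",
    token="makihara_shiho_idolmastercinderellagirls",
)
pipe.load_lora_weights("4420/makihara_shiho_idolmastercinderellagirls.safetensors")

image = pipe(
    "makihara_shiho_idolmastercinderellagirls, brown_hair, long_hair, green_eyes, smile"
).images[0]
image.save("preview.png")
```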
**The best step we recommend is 4420**, with a score of 0.882. The trigger words are:
1. `makihara_shiho_idolmastercinderellagirls`
2. `brown_hair, long_hair, green_eyes, smile, open_mouth, blush, bow, breasts`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.859 | [Download](5100/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.861 | [Download](4760/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.882** | [**Download**](4420/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.856 | [Download](4080/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.858 | [Download](3740/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.847 | [Download](3400/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.795 | [Download](3060/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.735 | [Download](2720/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.781 | [Download](2380/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.816 | [Download](2040/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.798 | [Download](1700/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.797 | [Download](1360/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.670 | [Download](1020/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.649 | [Download](680/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.455 | [Download](340/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
CyberHarem/hua_yuan_yurine_jashinchandropkick | CyberHarem | 2023-09-20T09:25:31Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hua_yuan_yurine_jashinchandropkick",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-20T09:09:33Z | ---
license: mit
datasets:
- CyberHarem/hua_yuan_yurine_jashinchandropkick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hua_yuan_yurine_jashinchandropkick
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 9600, you need to download `9600/hua_yuan_yurine_jashinchandropkick.pt` as the embedding and `9600/hua_yuan_yurine_jashinchandropkick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 9600**, with a score of 0.950. The trigger words are:
1. `hua_yuan_yurine_jashinchandropkick`
2. `long_hair, twintails, eyepatch, medical_eyepatch, bangs, mini_hat, hat, lolita_fashion, gothic_lolita, red_hair, closed_mouth, brown_eyes, ribbon, brown_hair, black_headwear`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **9600** | **0.950** | [**Download**](9600/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| 8960 | 0.925 | [Download](8960/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| 8320 | 0.947 | [Download](8320/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.940 | [Download](7680/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.933 | [Download](7040/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.940 | [Download](6400/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.922 | [Download](5760/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.931 | [Download](5120/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| 4480 | 0.891 | [Download](4480/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.896 | [Download](3840/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.893 | [Download](3200/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.861 | [Download](2560/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.837 | [Download](1920/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.733 | [Download](1280/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.635 | [Download](640/hua_yuan_yurine_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
hongyin/chat-awareness-0.8b | hongyin | 2023-09-20T09:24:23Z | 149 | 1 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"zh",
"arxiv:2302.13173",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-07T12:00:00Z | ---
language:
- en
- zh
pipeline_tag: text-generation
---
## hongyin/chat-awareness-0.8b
This is a lightweight English-Chinese bilingual chat assistant with 0.8B parameters, designed to reduce the computational cost of inference. Training consists of two stages: (1) next-token prediction (NTP); (2) instruction tuning.
```python
### Human: Paraphrasing the text I love you.
### Assistant: I have a crash on you.
```
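No loading example is given; the sketch below is a minimal `transformers` example, assuming the checkpoint loads through the standard causal-LM classes and follows the prompt format above. Generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: build a prompt in the "### Human / ### Assistant" format and generate a reply.
tokenizer = AutoTokenizer.from_pretrained("hongyin/chat-awareness-0.8b")
model = AutoModelForCausalLM.from_pretrained("hongyin/chat-awareness-0.8b")

prompt = "### Human: Paraphrasing the text I love you.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```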
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
|
ktadzjibov/opus-mt-en-ru-finetuned-en-to-ru-amls | ktadzjibov | 2023-09-20T09:19:34Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"base_model:Helsinki-NLP/opus-mt-en-ru",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ru",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-16T21:24:50Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ru
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned-en-to-ru-amls
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ru-en
split: validation
args: ru-en
metrics:
- name: Bleu
type: bleu
value: 28.7642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru-amls
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2832
- Bleu: 28.7642
- Gen Len: 29.495
## Model description
More information needed
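No usage example is included in the card; below is a minimal translation sketch, assuming the checkpoint works with the standard `transformers` translation pipeline. The input sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: English-to-Russian translation with the fine-tuned checkpoint.
translator = pipeline(
    "translation_en_to_ru",
    model="ktadzjibov/opus-mt-en-ru-finetuned-en-to-ru-amls",
)

print(translator("Machine translation is improving quickly.")[0]["translation_text"])
```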
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.9806 | 1.0 | 94761 | 1.2832 | 28.7642 | 29.495 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kligvasser/ffhq-vqvae-clip-landmark-arcface | kligvasser | 2023-09-20T09:18:41Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:ConditionalDDPMPipeline",
"region:us"
]
| null | 2023-09-14T07:41:15Z | Please refer [facial-diffusion](https://github.com/kligvasser/facial-diffusion) for evaluation and training examples.
Condition:

Sample:
 |
MattyB95/pre_trained_DF_RawNet2 | MattyB95 | 2023-09-20T09:09:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-09-17T22:03:34Z | ---
license: mit
---
## Citation
If you use this code in your research, please use the following citation:
```bibtex
@INPROCEEDINGS{9414234,
author={Tak, Hemlata and Patino, Jose and Todisco, Massimiliano and Nautsch, Andreas and Evans, Nicholas and Larcher, Anthony},
booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={End-to-End anti-spoofing with RawNet2},
year={2021},
pages={6369-6373}
}
```
|
TheBloke/LLaMA-7b-GGUF | TheBloke | 2023-09-20T09:03:53Z | 580 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"license:other",
"region:us"
]
| null | 2023-09-20T02:27:21Z | ---
base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai
inference: false
license: other
model_creator: Meta
model_name: LLaMA 7B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLaMA 7B - GGUF
- Model creator: Meta
- Original model: [LLaMA 7B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-7b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-7b.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-7b.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-7b.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-7b.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-7b.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA-7b-GGUF and below it, a specific filename to download, such as: llama-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMA-7b-GGUF", model_file="llama-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
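The card shows a ctransformers example only; an equivalent llama-cpp-python sketch might look like the following. The model path and settings are illustrative, not values prescribed by this card.

```python
from llama_cpp import Llama

# Illustrative sketch: load a downloaded GGUF file and run a short completion.
llm = Llama(
    model_path="./llama-7b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,        # context length
    n_gpu_layers=32,   # set to 0 if no GPU acceleration is available
)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```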
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta's LLaMA 7b
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
<!-- original-model-card end -->
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates | jordyvl | 2023-09-20T08:54:28Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-19T18:46:12Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8358
- Accuracy: 0.6925
- Exit 0 Accuracy: 0.065
- Exit 1 Accuracy: 0.0625
- Exit 2 Accuracy: 0.0625
- Exit 3 Accuracy: 0.0625
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.71 | 2 | 6.0791 | 0.0675 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 1.71 | 4 | 5.8341 | 0.1125 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 2.71 | 6 | 5.6298 | 0.1425 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 3.71 | 8 | 5.4519 | 0.1875 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 4.71 | 10 | 5.2664 | 0.1925 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 5.71 | 12 | 5.0830 | 0.2075 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 6.71 | 14 | 4.8990 | 0.215 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 7.71 | 16 | 4.7073 | 0.215 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 8.71 | 18 | 4.5002 | 0.1925 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 9.71 | 20 | 4.4009 | 0.2225 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 10.71 | 22 | 4.2853 | 0.23 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 11.71 | 24 | 4.0933 | 0.27 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 12.71 | 26 | 3.9761 | 0.27 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 13.71 | 28 | 3.9351 | 0.3025 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 14.71 | 30 | 3.8232 | 0.32 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 15.71 | 32 | 3.7217 | 0.345 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 16.71 | 34 | 3.6380 | 0.39 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 17.71 | 36 | 3.6336 | 0.3925 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 18.71 | 38 | 3.5816 | 0.415 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 19.71 | 40 | 3.4517 | 0.445 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 20.71 | 42 | 3.4066 | 0.475 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 21.71 | 44 | 3.3252 | 0.4825 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 22.71 | 46 | 3.3034 | 0.5075 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 23.71 | 48 | 3.2461 | 0.5275 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 24.71 | 50 | 3.2623 | 0.54 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 25.71 | 52 | 3.1701 | 0.545 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 26.71 | 54 | 3.2183 | 0.545 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 27.71 | 56 | 3.1410 | 0.5625 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 28.71 | 58 | 3.1476 | 0.56 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 29.71 | 60 | 3.0950 | 0.585 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 30.71 | 62 | 3.0338 | 0.605 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 31.71 | 64 | 3.0406 | 0.6 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 32.71 | 66 | 3.0577 | 0.615 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 33.71 | 68 | 3.0389 | 0.6125 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 34.71 | 70 | 3.0199 | 0.62 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 35.71 | 72 | 3.0103 | 0.645 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 36.71 | 74 | 2.9517 | 0.645 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 37.71 | 76 | 2.9726 | 0.645 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 38.71 | 78 | 2.9587 | 0.66 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 39.71 | 80 | 2.9024 | 0.6725 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 40.71 | 82 | 2.9354 | 0.655 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 41.71 | 84 | 2.8976 | 0.6775 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 42.71 | 86 | 2.8988 | 0.6875 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 43.71 | 88 | 2.8944 | 0.6825 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 44.71 | 90 | 2.8853 | 0.6825 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 45.71 | 92 | 2.8685 | 0.6825 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 46.71 | 94 | 2.8623 | 0.665 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 47.71 | 96 | 2.8534 | 0.675 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 48.71 | 98 | 2.8601 | 0.685 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 49.71 | 100 | 2.8378 | 0.6875 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 50.71 | 102 | 2.8203 | 0.6875 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 51.71 | 104 | 2.8153 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 52.71 | 106 | 2.8077 | 0.6925 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 53.71 | 108 | 2.8061 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 54.71 | 110 | 2.8099 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 55.71 | 112 | 2.8061 | 0.6875 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 56.71 | 114 | 2.8268 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 57.71 | 116 | 2.8331 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 58.71 | 118 | 2.8371 | 0.69 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 59.71 | 120 | 2.8358 | 0.6925 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|