| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Venki-ds/outputs | Venki-ds | 2023-09-19T13:12:11Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T13:11:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
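For reference, a config like the one above is normally expressed as a `BitsAndBytesConfig` when the base model is loaded for QLoRA training with `transformers`. The sketch below simply mirrors the listed values; the base-model name is a placeholder, since this card does not state which model the adapter was trained on.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above (sketch only).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-base-model",  # placeholder: the base model is not named in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```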
### Framework versions
- PEFT 0.6.0.dev0
|
lapki/Llama-2-7b-panorama-QLoRA | lapki | 2023-09-19T13:01:53Z | 7 | 1 | peft | [
"peft",
"llama",
"llama-2",
"news",
"text-generation",
"ru",
"dataset:its5Q/panorama",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| text-generation | 2023-07-28T13:24:15Z | ---
language:
- ru
library_name: peft
tags:
- llama
- llama-2
- news
datasets:
- its5Q/panorama
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---
# Llama 2 7B, fine-tuned on Panorama media
This repo contains the QLoRA adapter.
Prompt:
```
Write a hypothetical news story based on the given headline
### Title:
{prompt}
Text:
```
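As a rough illustration of how the adapter and the prompt template above could be used together (this snippet is not from the original card; the generation settings and the example headline are arbitrary):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "lapki/Llama-2-7b-panorama-QLoRA")

headline = "Your headline here"  # the adapter was trained on Russian news, so a Russian headline is expected
prompt = (
    "Write a hypothetical news story based on the given headline\n"
    "### Title:\n"
    f"{headline}\n"
    "Text:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```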
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
### Additional information
Thanks to [its5Q](https://huggingface.co/its5Q) for the dataset and help. |
ctu-aic/lora-xlm-roberta-large-squad2-csfever_v2-f1 | ctu-aic | 2023-09-19T13:00:53Z | 12 | 1 | peft | [
"peft",
"text-classification",
"cs",
"dataset:ctu-aic/csfever_v2",
"base_model:deepset/xlm-roberta-large-squad2",
"base_model:adapter:deepset/xlm-roberta-large-squad2",
"license:cc-by-sa-4.0",
"region:us"
]
| text-classification | 2023-07-25T14:03:38Z | ---
language:
- cs
license: cc-by-sa-4.0
library_name: peft
datasets:
- ctu-aic/csfever_v2
metrics:
- accuracy
- f1
- recall
- precision
pipeline_tag: text-classification
base_model: deepset/xlm-roberta-large-squad2
---
# Model card for lora-xlm-roberta-large-squad2-csfever_v2-f1
## Model details
Model for natural language inference.
## Training procedure
### Framework versions
- PEFT 0.4.0
## Uses
### PEFT (Transformers)
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, Pipeline, AutoTokenizer

config = PeftConfig.from_pretrained("ctu-aic/lora-xlm-roberta-large-squad2-csfever_v2-f1")
model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, "ctu-aic/lora-xlm-roberta-large-squad2-csfever_v2-f1")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Pipeline for NLI
class NliPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "evidence" in kwargs:
            preprocess_kwargs["evidence"] = kwargs["evidence"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, claim, evidence=""):
        model_input = self.tokenizer(claim, evidence, return_tensors=self.framework, truncation=True)
        return model_input

    def _forward(self, model_inputs):
        outputs = self.model(**model_inputs)
        return outputs

    def postprocess(self, model_outputs):
        logits = model_outputs.logits
        predictions = torch.argmax(logits, dim=-1)
        return {"logits": logits, "label": int(predictions[0])}

nli_pipeline = NliPipeline(model=model, tokenizer=tokenizer)
nli_pipeline("claim", "evidence")
``` |
Gustrd/open-llama-13b-lora-cabra-adapter | Gustrd | 2023-09-19T12:59:53Z | 6 | 2 | peft | [
"peft",
"pt",
"dataset:Gustrd/dolly-15k-libretranslate-pt",
"base_model:VMware/open-llama-13b-open-instruct",
"base_model:adapter:VMware/open-llama-13b-open-instruct",
"license:cc-by-sa-3.0",
"region:us"
]
| null | 2023-07-18T19:57:27Z | ---
language:
- pt
license: cc-by-sa-3.0
library_name: peft
datasets:
- Gustrd/dolly-15k-libretranslate-pt
base_model: VMware/open-llama-13b-open-instruct
---
# Cabra: A Portuguese instruction-finetuned Open-LLaMA
LoRA adapter created with the procedures detailed at the GitHub repository: https://github.com/gustrd/cabra .
This training was done for 2 epochs using one A4000 on Paperspace.
The GGML version was created with llama.cpp "convert-lora-to-ggml.py".
This LoRA adapter was created following the procedure below:
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0 |
Kendong/ad_dog | Kendong | 2023-09-19T12:59:53Z | 4 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-19T12:48:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Kendong/ad_dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
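A minimal loading sketch (not part of the original card); it assumes the repo stores the LoRA weights in the standard diffusers layout so that `load_lora_weights` can find them:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kendong/ad_dog")  # assumes standard pytorch_lora_weights in the repo

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```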
|
fetiska/be_healthy | fetiska | 2023-09-19T12:56:24Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T12:15:32Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 19.00 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r fetiska/be_healthy
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=be_healthy
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=be_healthy --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
pengold/distilbert-base-vietnamese-case | pengold | 2023-09-19T12:55:26Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-15T13:15:53Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-vietnamese-case
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-vietnamese-case
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1978
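A hedged inference sketch (not taken from this card) using the standard fill-mask pipeline; the Vietnamese example sentence is invented:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pengold/distilbert-base-vietnamese-case")
# distilbert-base-cased checkpoints use the [MASK] token
print(fill_mask("Hà Nội là thủ đô của [MASK]."))
```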
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6995 | 1.0 | 313 | 5.7838 |
| 5.7246 | 2.0 | 626 | 5.5341 |
| 5.4565 | 3.0 | 939 | 5.3280 |
| 5.271 | 4.0 | 1252 | 5.1409 |
| 5.0514 | 5.0 | 1565 | 4.9143 |
| 4.874 | 6.0 | 1878 | 4.7130 |
| 4.7083 | 7.0 | 2191 | 4.5682 |
| 4.5677 | 8.0 | 2504 | 4.3724 |
| 4.4244 | 9.0 | 2817 | 4.3262 |
| 4.3013 | 10.0 | 3130 | 4.1231 |
| 4.2077 | 11.0 | 3443 | 4.1388 |
| 4.1103 | 12.0 | 3756 | 3.8696 |
| 4.0141 | 13.0 | 4069 | 3.8849 |
| 3.9435 | 14.0 | 4382 | 3.7311 |
| 3.8604 | 15.0 | 4695 | 3.7155 |
| 3.804 | 16.0 | 5008 | 3.6445 |
| 3.7076 | 17.0 | 5321 | 3.5784 |
| 3.6807 | 18.0 | 5634 | 3.5516 |
| 3.6239 | 19.0 | 5947 | 3.4008 |
| 3.5729 | 20.0 | 6260 | 3.4827 |
| 3.5308 | 21.0 | 6573 | 3.3921 |
| 3.4707 | 22.0 | 6886 | 3.3729 |
| 3.4341 | 23.0 | 7199 | 3.3543 |
| 3.3989 | 24.0 | 7512 | 3.2836 |
| 3.3505 | 25.0 | 7825 | 3.3003 |
| 3.3256 | 26.0 | 8138 | 3.1750 |
| 3.2892 | 27.0 | 8451 | 3.1930 |
| 3.2614 | 28.0 | 8764 | 3.2089 |
| 3.2387 | 29.0 | 9077 | 3.1978 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sn4kehead/TransNetV2 | Sn4kehead | 2023-09-19T12:55:21Z | 0 | 3 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-09-19T12:52:34Z | ---
license: apache-2.0
---
# TransNet V2: Shot Boundary Detection Neural Network
PyTorch weights for TransNetV2.
https://github.com/soCzech/TransNetV2 |
checkiejan/prefix-paraphase-50-20-auto | checkiejan | 2023-09-19T12:49:54Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T12:49:52Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
xianglingjing/llama-2-7b-int4-text-to-sql-LoRA | xianglingjing | 2023-09-19T12:41:20Z | 24 | 1 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-08-24T18:32:18Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
yhk/speecht5_tts_voxpopuli_nl | yhk | 2023-09-19T12:34:19Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"test",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-09-18T11:00:52Z | ---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- test
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4590
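A sketch of synthesising Dutch speech with this checkpoint (not from the original card); it assumes the repo ships the SpeechT5 processor files and uses a CMU ARCTIC x-vector as the speaker embedding, which is a common but arbitrary choice:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("yhk/speecht5_tts_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("yhk/speecht5_tts_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Goedemorgen, welkom bij deze demonstratie.", return_tensors="pt")
# Any 512-dim x-vector works as the speaker embedding; index 7306 is just an example voice.
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```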
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5205 | 4.3 | 1000 | 0.4807 |
| 0.5018 | 8.6 | 2000 | 0.4657 |
| 0.4992 | 12.9 | 3000 | 0.4607 |
| 0.4939 | 17.2 | 4000 | 0.4590 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jiwon65/whisper-small_korean-zeroth | jiwon65 | 2023-09-19T12:25:02Z | 44 | 2 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-19T10:42:19Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-korr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-korr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3466
- Wer: 19.9610
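A minimal transcription sketch (not part of the original card; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiwon65/whisper-small_korean-zeroth")
print(asr("korean_sample.wav")["text"])  # placeholder path; audio should be 16 kHz mono
```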
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3119 | 0.69 | 100 | 0.3334 | 20.6884 |
| 0.1223 | 1.39 | 200 | 0.3179 | 21.4336 |
| 0.0757 | 2.08 | 300 | 0.3234 | 20.3158 |
| 0.0349 | 2.77 | 400 | 0.3329 | 20.8481 |
| 0.0172 | 3.47 | 500 | 0.3354 | 20.1916 |
| 0.0059 | 4.16 | 600 | 0.3357 | 19.7480 |
| 0.0057 | 4.85 | 700 | 0.3396 | 19.9965 |
| 0.0046 | 5.55 | 800 | 0.3417 | 19.7658 |
| 0.0025 | 6.24 | 900 | 0.3461 | 20.0497 |
| 0.0029 | 6.93 | 1000 | 0.3466 | 19.9610 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jasonvan/llama-2-13b-chat-text2sql | jasonvan | 2023-09-19T12:21:50Z | 4 | 1 | peft | [
"peft",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| null | 2023-07-28T14:06:11Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
ckmfong/ppo-Huggy | ckmfong | 2023-09-19T12:16:55Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-19T12:16:49Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ckmfong/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lyimo/potato | lyimo | 2023-09-19T12:11:41Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2023-09-19T12:11:34Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
EnDevSols/llama-2-7b-qlora-medical | EnDevSols | 2023-09-19T12:09:15Z | 2 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-07-25T13:59:22Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
urbija/ner-bio-annotated-7-1 | urbija | 2023-09-19T12:08:37Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-19T09:43:23Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-bio-annotated-7-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bio-annotated-7-1
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Precision: 0.8028
- Recall: 0.8397
- F1: 0.8209
- Accuracy: 0.9370
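A hedged usage sketch (not from this card) with the token-classification pipeline; the example sentence is invented and the card does not document its entity label set:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="urbija/ner-bio-annotated-7-1",
    aggregation_strategy="simple",
)
print(ner("Aspirin reduced fever in patients with influenza."))
```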
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 67 | 0.2779 | 0.7497 | 0.7533 | 0.7515 | 0.9107 |
| No log | 2.0 | 134 | 0.2448 | 0.7718 | 0.7888 | 0.7802 | 0.9224 |
| No log | 3.0 | 201 | 0.2289 | 0.7716 | 0.8319 | 0.8006 | 0.9287 |
| No log | 4.0 | 268 | 0.2158 | 0.7995 | 0.8393 | 0.8189 | 0.9362 |
| No log | 5.0 | 335 | 0.2128 | 0.8028 | 0.8397 | 0.8209 | 0.9370 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Gustrd/open-llama-13b-cabra-gtpq-lora-adapter | Gustrd | 2023-09-19T12:08:31Z | 3 | 0 | peft | [
"peft",
"base_model:Gustrd/open-llama-13b-4bit-128g-GPTQ",
"base_model:adapter:Gustrd/open-llama-13b-4bit-128g-GPTQ",
"region:us"
]
| null | 2023-07-17T21:03:28Z | ---
library_name: peft
base_model: Gustrd/open-llama-13b-4bit-128g-GPTQ
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
NursNurs/T5ForReverseDictionary | NursNurs | 2023-09-19T12:08:31Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-13T15:39:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Gustrd/mpt-7b-lora-cabra-adapter | Gustrd | 2023-09-19T12:07:57Z | 8 | 0 | peft | [
"peft",
"pt",
"dataset:Gustrd/dolly-15k-hippo-translated-pt-12k",
"base_model:HachiML/mpt-7b-instruct-for-peft",
"base_model:adapter:HachiML/mpt-7b-instruct-for-peft",
"license:cc-by-3.0",
"region:us"
]
| null | 2023-08-17T18:41:06Z | ---
language:
- pt
license: cc-by-3.0
library_name: peft
datasets:
- Gustrd/dolly-15k-hippo-translated-pt-12k
base_model: HachiML/mpt-7b-instruct-for-peft
---
### Cabra: A Portuguese instruction-finetuned Open-LLaMA
LoRA adapter created with the procedures detailed at the GitHub repository: https://github.com/gustrd/cabra .
This training was done for 2 epochs using two T4s on Kaggle.
This LoRA adapter was created following the procedure:
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0 |
CyberHarem/imai_kana_idolmastercinderellagirls | CyberHarem | 2023-09-19T12:05:55Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/imai_kana_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-19T11:51:10Z | ---
license: mit
datasets:
- CyberHarem/imai_kana_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of imai_kana_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2720, you need to download `2720/imai_kana_idolmastercinderellagirls.pt` as the embedding and `2720/imai_kana_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2720**, with a score of 0.947. The trigger words are:
1. `imai_kana_idolmastercinderellagirls`
2. `twintails, brown_hair, brown_eyes, blush, open_mouth, ribbon, smile, hair_ribbon, breasts`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.898 | [Download](5100/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.881 | [Download](4760/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.905 | [Download](4420/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.896 | [Download](4080/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.897 | [Download](3740/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.905 | [Download](3400/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.922 | [Download](3060/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| **2720** | **0.947** | [**Download**](2720/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.880 | [Download](2380/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.924 | [Download](2040/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.892 | [Download](1700/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.901 | [Download](1360/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.884 | [Download](1020/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.927 | [Download](680/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.843 | [Download](340/imai_kana_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Xilabs/llama-2-7B-Guanaco-QLoRA | Xilabs | 2023-09-19T12:00:55Z | 4 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-07-23T17:46:00Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
lyimo/irishpotato | lyimo | 2023-09-19T11:58:49Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2023-09-19T11:58:41Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Undi95/Storytelling-v1-13B-lora | Undi95 | 2023-09-19T11:57:46Z | 15 | 6 | peft | [
"peft",
"base_model:TheBloke/Llama-2-13B-fp16",
"base_model:adapter:TheBloke/Llama-2-13B-fp16",
"license:other",
"region:us"
]
| null | 2023-09-07T23:39:30Z | ---
license: other
library_name: peft
base_model: TheBloke/Llama-2-13B-fp16
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
I'm NOT the author of this work.
I quote anon:
```shell
Well, here it is. Storytelling Qlora. Trained on base llama2 13B but works flawlessly on other 13Bs. Idk about other sizes.
25MB of nsfw books, 60MB of sfwish ones.
No special formatting other than *** between chapters and ⁂ between books. Takes some text to get going but once you have some context filled, it feels way better for prose than raw llama or instruct models, imho.
Do whatever you want with it, I can't be bothered to maintain a HF page. WTFPL.
It's just shit from nai's archive
```
Credit to "anon49" |
matttvpl/model_v1 | matttvpl | 2023-09-19T11:56:58Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:poquad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-04-25T15:23:44Z | ---
tags:
- generated_from_trainer
datasets:
- poquad
model-index:
- name: model_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_v1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 334 | 1.4651 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ioana23/distilbert-base-uncased-finetuned-imdb | Ioana23 | 2023-09-19T11:47:28Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-19T11:33:04Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.10.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
benji1a/openllama-3b-pelt-squad_v2 | benji1a | 2023-09-19T11:43:19Z | 1 | 0 | peft | [
"peft",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"region:us"
]
| null | 2023-08-24T17:37:45Z | ---
library_name: peft
base_model: openlm-research/open_llama_3b_v2
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
derguene/saytutension-xlmroberta-v1 | derguene | 2023-09-19T11:32:38Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-19T11:31:47Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# derguene/saytutension-xlmroberta-v1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("derguene/saytutension-xlmroberta-v1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bertin-project/bertin-alpaca-lora-7b | bertin-project | 2023-09-19T11:32:13Z | 9 | 4 | peft | [
"peft",
"text-generation",
"es",
"dataset:bertin-project/alpaca-spanish",
"license:openrail",
"region:us"
]
| text-generation | 2023-03-27T13:58:50Z | ---
language:
- es
license: openrail
library_name: peft
datasets:
- bertin-project/alpaca-spanish
pipeline_tag: text-generation
base_model: decapoda-research/llama-7b-hf
---
# BERTIN-Alpaca-LoRA 7B
This is a Spanish adapter generated by fine-tuning LLaMA-7B on a [Spanish Alpaca](https://huggingface.co/datasets/bertin-project/alpaca-spanish) dataset.
## Usage
```python
from peft import PeftModel
from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig
base_model = "decapoda-research/llama-7b-hf"
tokenizer = LLaMATokenizer.from_pretrained(base_model)
model = LLaMAForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "bertin-project/bertin-alpaca-lora-7b")
```
Until `PEFT` is fully supported in Hugging Face's pipelines, for generation we can either consolidate the LoRA weights into the LLaMA model weights, or use the adapter's `generate()` method. Remember that the prompt still needs the English template:
```python
# Generate responses
def generate(instruction, input=None):
    if input:
        prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""
    else:
        prompt = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. # noqa: E501

### Instruction:
{instruction}

### Response:
"""
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(temperature=0.2, top_p=0.75, num_beams=4),
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    for seq in generation_output.sequences:
        output = tokenizer.decode(seq)
        print(output.split("### Response:")[1].strip())

generate("Escribe un correo electrónico dando la bienvenida a un nuevo empleado llamado Manolo.")
# Estimado Manolo,
#
# ¡Bienvenido a nuestro equipo! Estamos muy contentos de que hayas decidido unirse a nosotros y estamos ansiosos por comenzar a trabajar juntos.
#
# Nos gustaría darte las gracias por tu interés en nuestro equipo y esperamos que tengas un gran tiempo aquí.
#
# Si tienes alguna pregunta o duda, no dudes en contactarnos.
#
# Atentamente,
# Equipo de [Nombre del Departamento]
```
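The weight-consolidation route mentioned above can be done with PEFT's `merge_and_unload()`. A minimal sketch (not part of the original card), reusing the `model` and `tokenizer` loaded above, and assuming the base model was loaded in full/half precision rather than 8-bit (merging into quantized weights is not supported):
```python
# Fold the LoRA deltas into the base weights so the model can be used without the peft wrapper.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("bertin-alpaca-lora-7b-merged")  # output path is arbitrary
tokenizer.save_pretrained("bertin-alpaca-lora-7b-merged")
```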
## Data
The dataset is a translation to Spanish of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json) (a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca)) using OpenAI's `gpt-3.5-turbo` model. We translated with a full-sample prompt instead of per-string prompts, which resulted in more coherent `(instruction, input, output)` tuples and cost around $60.
**This dataset cannot be used to create models that compete in any way with OpenAI.**
## Finetuning
To fine-tune the LLaMA model we used the code available on [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to finetune the LLaMA model using PEFT from Hugging Face. We run finetuning for 3 epochs using sequence length of 512 with no gradient accumulation on a single A6000 with 48GB VRAM for 12 hours on top of LLaMA-7B.
## Example outputs
Stanford Alpaca:
```
Instruction: Inventa una excusa creativa para decir que no tengo que ir a la fiesta.
Response: I'm sorry, I can't make it to the party. I have to stay home and take care of my sick pet.
```
BERTIN Alpaca LoRA:
```
Instruction: Inventa una excusa creativa para decir que no tengo que ir a la fiesta.
Response: No puedo ir a la fiesta porque tengo que terminar un proyecto urgente.
```
Stanford Alpaca:
```
Instruction: ¿Cómo puedo comenzar una carrera como científico de datos? Responde con una lista de pasos a seguir.
Response: To begin a career as a data scientist, there are a few steps to take. First, you should have a bachelor's degree in a STEM-related field, such as computer science, mathematics, statistics, or engineering. You should also have experience with programming languages such as Python, R, or Java. Additionally, it is helpful to have knowledge of machine learning, data visualization, and data analytics. Finally, it is important to stay up to date with the latest trends and technologies in the field.
```
BERTIN Alpaca LoRA:
```
Instruction: ¿Cómo puedo comenzar una carrera como científico de datos? Responde con una lista de pasos a seguir.
Response: Para comenzar una carrera como científico de datos, hay varios pasos a seguir. Primero, obtener una licenciatura en matemáticas, informática o ciencias de la computación. Luego, obtener una maestría o doctorado en ciencias de la computación, informática o alguna otra área relacionada. Finalmente, obtener experiencia en el campo trabajando en proyectos de investigación o desarrollando aplicaciones.
```
You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb).
## References
- [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [BERTIN Alpaca](https://huggingface.co/datasets/bertin-project/alpaca-spanish)
- [Alpaca LoRA](https://github.com/tloen/alpaca-lora)
- [ChatGPT](https://openai.com/blog/chatgpt)
- [Hugging Face](https://huggingface.co/)
## Hardware Requirements
For training we have used an A6000 48GB VRAM Nvidia GPU. For eval, you can use a T4. |
NbAiLab/nb-alpaca-lora-7b | NbAiLab | 2023-09-19T11:32:00Z | 10 | 5 | peft | [
"peft",
"safetensors",
"text-generation",
"no",
"nb",
"dataset:NbAiLab/norwegian-alpaca",
"license:openrail",
"region:us"
]
| text-generation | 2023-03-27T11:28:50Z | ---
language:
- 'no'
- nb
license: openrail
library_name: peft
datasets:
- NbAiLab/norwegian-alpaca
pipeline_tag: text-generation
base_model: decapoda-research/llama-7b-hf
---
# NB-Alpaca-LoRA 7B
This is a Norwegian adapter generated by fine-tuning LLaMA-7B on a [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
## Usage
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM
base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "NbAiLab/nb-alpaca-lora-7b")
```
For generation, the prompt still needs the English template:
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
instruction = "Skriv en e-post der du ønsker velkommen til en ny medarbeider ved navn Svein"
pipe(f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
""")
# Kjære Svein,
#
# Velkommen til vårt team! Vi er så glade for å ha deg med oss. Vi ser frem til å hjelpe deg med å nå dine mål og oppnå dine drømmer.
#
# Vi er alltid tilgjengelige hvis du har noen spørsmål eller ønsker å diskutere noen av våre prosjekter.
#
# Vi ser frem til å jobbe sammen med deg!
#
# Med vennlig
```
## Data
The dataset is a translation to Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json) (a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca)) using OpenAI's `gpt-3.5-turbo` model. We translated with a full-sample prompt instead of per-string prompts, which resulted in more coherent `(instruction, input, output)` tuples and cost around $60.
**This dataset cannot be used to create models that compete in any way with OpenAI.**
## Finetuning
To fine-tune the LLaMA model we used the code available on [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to finetune the LLaMA model using PEFT from Hugging Face. We run finetuning for 3 epochs using sequence length of 512 with no gradient accumulation on a single A6000 with 48GB VRAM for 12 hours on top of LLaMA-7B.
## Example outputs
Stanford Alpaca:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: I'm sorry, but I don't need to go to the party.
```
NB-Alpaca-LoRA-7B:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: Jeg beklager at jeg ikke kan gå på festen, men jeg tror at det er bedre at jeg ikke gjør det.
```
[NB-GTP-J-6B-Alpaca](https://huggingface.co/NbAiLab/nb-gpt-j-6B-alpaca):
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: Jeg beklager, men jeg har en avtale jeg ikke kan avlyse.
```
Stanford Alpaca:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
Starting a career as a data scientist requires a combination of education, experience, and skills.
1. Obtain a Bachelor's or Master's degree in computer science, statistics, mathematics, or a related field.
2. Gain practical experience through internships or part-time jobs.
3. Learn programming languages such as Python, R, and SQL.
4. Develop strong analytical and problem-solving skills.
5. Build a portfolio of projects to showcase your work.
6. Network with other data scientists to learn
```
NB-Alpaca-LoRA-7B:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
1. Søk etter en universitetsgrad i datavitenskap, matematikk, informatikk eller en relatert fag.
2. Søk etter en jobb som dataforsker eller dataanalytiker.
3. Utvikle dine ferdigheter i datavitenskap, matematikk, informatikk eller en relatert fag.
4. Utvikle dine ferdigheter innen dataanalyse.
```
[NB-GTP-J-6B-Alpaca](https://huggingface.co/NbAiLab/nb-gpt-j-6B-alpaca):
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
1. Lær programmeringsspråk som Python og Java.
2. Skaff deg en grad i informatikk, datavitenskap eller relaterte felt.
3. Bli sertifisert innen datamaskinsyn, maskinlæring eller andre relaterte felt.
4. Bygg nettverk med andre dataforskere.
5. Delta på konferanser og kurs for å holde deg oppdatert på de siste utviklingene innen feltet.
```
You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb).
## References
- [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca)
- [Alpaca LoRA](https://github.com/tloen/alpaca-lora)
- [ChatGPT](https://openai.com/blog/chatgpt)
- [Hugging Face](https://huggingface.co/)
## Hardware Requirements
For training we have used an A6000 48GB VRAM Nvidia GPU. For eval, you can use a T4. |
radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram | radiogroup-crits | 2023-09-19T11:21:43Z | 89 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"it",
"mozilla-foundation/common_voice_8_0",
"speech",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-29T08:31:46Z | ---
language:
- it
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- it
- mozilla-foundation/common_voice_8_0
- speech
- wav2vec2
model-index:
- name: XLS-R Wav2Vec2 Italian by radiogroup crits
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 italian
type: mozilla-foundation/common_voice_8_0
args: it
metrics:
- name: Test WER
type: wer
value: 9.04
- name: Test CER
type: cer
value: 2.2
- name: Test WER (+LM)
type: wer
value: 6.24
- name: Test CER (+LM)
type: cer
value: 1.67
---
# XLS-R-1B-ITALIAN-DOC4LM-5GRAM
## Fine-tuned XLS-R 1B model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
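A short inference sketch (not from the original card) showing explicit resampling to 16 kHz; the audio file name is a placeholder, and decoding with the bundled 5-gram LM additionally requires `pyctcdecode` and `kenlm` to be installed:
```python
import librosa
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram",
)
audio, _ = librosa.load("italian_sample.wav", sr=16000)  # resample to the required 16 kHz
print(asr(audio)["text"])
```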
## Language model information
Our language model was generated using a dataset of Italian Wikipedia articles and manual transcriptions of radio newspapers and television programs.
## Download the Common Voice 8.0 dataset for Italian
```python
from datasets import load_dataset
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "it", use_auth_token=True)
```
## Evaluation Commands
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:
```bash
python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs --greedy
mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_greedy.txt
mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_greedy.txt
mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_greedy.txt
python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs
mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_lm.txt
mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_lm.txt
mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_lm.txt
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{crits2022wav2vec2-xls-r-1b-italian-doc4lm-5gram,
title={XLS-R Wav2Vec2 Italian by radiogroup crits},
author={Teraoni Prioletti Raffaele, Casagranda Paolo and Russo Francesco},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram}},
year={2022}
}
``` |
omarelsayeed/retriever | omarelsayeed | 2023-09-19T11:19:00Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-18T19:12:49Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 24000 with parameters:
```
{'batch_size': 50, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
0sunfire0/Llama_7B_Test08 | 0sunfire0 | 2023-09-19T11:17:26Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-04T10:28:18Z | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
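As a hedged illustration (not part of the original card), the adapter could be loaded for inference on top of the base model named above, mirroring the 4-bit settings listed; the device placement and dtype choices are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization mirroring the training-time bitsandbytes config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", quantization_config=bnb_config, device_map="auto"
)
# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "0sunfire0/Llama_7B_Test08")
```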
|
SHENMU007/neunit_BASE_V9.5.13 | SHENMU007 | 2023-09-19T11:14:38Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-09-19T09:49:48Z | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
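As a rough usage sketch (not from the original card), a fine-tuned SpeechT5 checkpoint like this one is typically driven through the standard `transformers` SpeechT5 classes. The zero speaker embedding, example text, and output settings below are placeholder assumptions; a real 512-dimensional x-vector should be supplied for sensible audio.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V9.5.13")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V9.5.13")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,欢迎使用这个语音合成模型。", return_tensors="pt")

# Placeholder speaker embedding (assumption); replace with a real x-vector.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```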
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
EladAssia/ppo-LunarLander-v2 | EladAssia | 2023-09-19T11:14:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T11:13:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.92 +/- 22.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual `huggingface_sb3` naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("EladAssia/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AhmedBou/Falcon_7B_Science_Exam_QLoRA | AhmedBou | 2023-09-19T11:12:13Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T11:12:10Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
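The card does not state which Falcon checkpoint the adapter was trained against. One way to check, and a reasonable first step before loading it, is to read the base model id recorded in the adapter config, as in this small sketch:
```python
from peft import PeftConfig

config = PeftConfig.from_pretrained("AhmedBou/Falcon_7B_Science_Exam_QLoRA")
# base_model_name_or_path records the checkpoint this adapter expects.
print(config.base_model_name_or_path)
```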
|
dbecker1/test_lora_mdl3 | dbecker1 | 2023-09-19T11:08:35Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-19T10:30:03Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - dbecker1/test_lora_mdl3
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the None dataset. You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
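A hedged loading sketch with `diffusers` (the prompt, dtype, and device choices are assumptions; the card does not document a trigger word):
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Use the fp16-fix VAE mentioned above.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository.
pipe.load_lora_weights("dbecker1/test_lora_mdl3")

image = pipe("a sample image in the style learned by this LoRA", num_inference_steps=30).images[0]
image.save("sample.png")
```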
|
Phoenix10062002/llama2-faq-chatbot | Phoenix10062002 | 2023-09-19T11:06:33Z | 5 | 0 | peft | [
"peft",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
]
| null | 2023-08-04T14:36:15Z | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Sarmila/pubmed-bert-mlm-squad-covidqa | Sarmila | 2023-09-19T11:05:04Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"base_model:Sarmila/pubmed-bert-mlm",
"base_model:finetune:Sarmila/pubmed-bert-mlm",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-19T07:18:33Z | ---
license: mit
base_model: Sarmila/pubmed-bert-mlm
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: pubmed-bert-mlm-squad-covidqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmed-bert-mlm-squad-covidqa
This model is a fine-tuned version of [Sarmila/pubmed-bert-mlm](https://huggingface.co/Sarmila/pubmed-bert-mlm) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5812
## Model description
More information needed
## Intended uses & limitations
More information needed
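As a minimal illustration of the intended use (the question and context below are made up, not taken from the dataset), the checkpoint can be queried through the standard question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sarmila/pubmed-bert-mlm-squad-covidqa")
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).",
)
print(result["answer"], result["score"])
```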
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 51 | 0.5056 |
| No log | 2.0 | 102 | 0.5423 |
| No log | 3.0 | 153 | 0.5812 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
royokong/prompteol-opt-1.3b | royokong | 2023-09-19T11:02:36Z | 4 | 1 | peft | [
"peft",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"region:us"
]
| null | 2023-07-27T14:47:42Z | ---
library_name: peft
base_model: facebook/opt-1.3b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
legacy107/flan-t5-large-bottleneck-adapter-cpgQA | legacy107 | 2023-09-19T10:53:57Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-31T13:29:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: flan-t5-large-bottleneck-adapter-cpgQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-bottleneck-adapter-cpgQA
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1175
- Squad: {'exact_match': 74.03846153846153, 'f1': 92.73025873728763}
- Bleu: {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Squad | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.4902 | 0.23 | 100 | 0.1695 | {'exact_match': 59.61538461538461, 'f1': 88.39664262292322} | {'bleu': 0.8611708764560243, 'precisions': [0.8791469194312796, 0.8657487091222031, 0.8552631578947368, 0.8448979591836735], 'brevity_penalty': 1.0, 'length_ratio': 1.0284321689683185, 'translation_length': 1266, 'reference_length': 1231} |
| 0.3577 | 0.45 | 200 | 0.3243 | {'exact_match': 47.11538461538461, 'f1': 75.97696037540817} | {'bleu': 0.44597697779640594, 'precisions': [0.9202211690363349, 0.9087779690189329, 0.8994360902255639, 0.8948979591836734], 'brevity_penalty': 0.49236704919459706, 'length_ratio': 0.5852981969486823, 'translation_length': 1266, 'reference_length': 2163} |
| 0.2751 | 0.68 | 300 | 0.1577 | {'exact_match': 69.23076923076923, 'f1': 89.48763228957931} | {'bleu': 0.8601252797928449, 'precisions': [0.8925750394944708, 0.878657487091222, 0.8656015037593985, 0.8561224489795919], 'brevity_penalty': 0.985104158338853, 'length_ratio': 0.9852140077821012, 'translation_length': 1266, 'reference_length': 1285} |
| 0.5794 | 0.9 | 400 | 0.4970 | {'exact_match': 32.69230769230769, 'f1': 67.89210636760458} | {'bleu': 0.5849757239612657, 'precisions': [0.7282780410742496, 0.693631669535284, 0.6635338345864662, 0.6387755102040816], 'brevity_penalty': 0.8599604506941122, 'length_ratio': 0.8689087165408373, 'translation_length': 1266, 'reference_length': 1457} |
| 0.2114 | 1.13 | 500 | 0.1245 | {'exact_match': 67.3076923076923, 'f1': 89.96309177836906} | {'bleu': 0.8997821698527838, 'precisions': [0.9360189573459715, 0.9285714285714286, 0.9238721804511278, 0.9204081632653062], 'brevity_penalty': 0.9704302027764995, 'length_ratio': 0.9708588957055214, 'translation_length': 1266, 'reference_length': 1304} |
| 0.1765 | 1.36 | 600 | 0.1214 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1822 | 1.58 | 700 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.14 | 1.81 | 800 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1456 | 2.04 | 900 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1172 | 2.26 | 1000 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1376 | 2.49 | 1100 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1683 | 2.71 | 1200 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.0717 | 2.94 | 1300 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1038 | 3.17 | 1400 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.0812 | 3.39 | 1500 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1887 | 3.62 | 1600 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.0824 | 3.85 | 1700 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1046 | 4.07 | 1800 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.0952 | 4.3 | 1900 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1054 | 4.52 | 2000 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1603 | 4.75 | 2100 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1643 | 4.98 | 2200 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1326 | 5.2 | 2300 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1922 | 5.43 | 2400 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.1154 | 5.66 | 2500 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
| 0.07 | 5.88 | 2600 | 0.1175 | {'exact_match': 74.03846153846153, 'f1': 92.73025873728763} | {'bleu': 0.9331748310720637, 'precisions': [0.9447077409162717, 0.9380378657487092, 0.9332706766917294, 0.9285714285714286], 'brevity_penalty': 0.9968454284876576, 'length_ratio': 0.9968503937007874, 'translation_length': 1266, 'reference_length': 1270} |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Luciano/lora-4bit-Llama-2-13b-hf-lener_br | Luciano | 2023-09-19T10:53:54Z | 6 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
]
| null | 2023-08-20T11:16:06Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Luciano/Llama-2-7b-chat-hf-dolly-mini | Luciano | 2023-09-19T10:52:32Z | 3 | 0 | peft | [
"peft",
"pytorch",
"llama",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-08-29T11:10:04Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Luciano/lora-4bit-Llama-2-7b-hf-lener_br | Luciano | 2023-09-19T10:52:13Z | 55 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-08-06T11:45:27Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
mhenrichsen/context-aware-splitter-1b | mhenrichsen | 2023-09-19T10:45:24Z | 182 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"da",
"dataset:mhenrichsen/context-aware-splits",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-18T08:26:42Z | ---
license: apache-2.0
datasets:
- mhenrichsen/context-aware-splits
language:
- da
---
# Context Aware Splitter
7b model available [here](https://huggingface.co/mhenrichsen/context-aware-splitter-7b).
CAS is a text splitter for Retrieval Augmented Generation.
It is trained on 12.3k Danish texts with a total token count of 13.4M.
## What does it do?
CAS takes a text (str), reads and understands its context, and then provides the best splits based on a defined word count.
It returns a dict with the keys:
- splits: list[str]
- topic: str
## Code example
```python
from transformers import AutoTokenizer, TextStreamer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mhenrichsen/context-aware-splitter-1b")
tokenizer = AutoTokenizer.from_pretrained("mhenrichsen/context-aware-splitter-1b")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
WORD_SPLIT_COUNT = 50
prompt_template = """### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {word_count} ord.
### Input:
{text}
### Response:
"""
artikel = """Kina er stærkt utilfreds med, at Tysklands udenrigsminister, Annalena Baerbock, har omtalt den kinesiske præsident Xi Jinping som en diktator.
- Bemærkningerne fra Tyskland er ekstremt absurde, krænker Kinas politiske værdighed alvorligt og er en åben politisk provokation, udtalte talsperson fra det kinesiske udenrigsministerium Mao Ning i går ifølge CNN.
Bemærkningen fra udenrigsminister Annalena Baerbock faldt i et interview om krigen i Ukraine med Fox News i sidste uge.
- Hvis Putin skulle vinde denne krig, hvilket signal ville det så sende til andre diktatorer i verden, som Xi, som den kinesiske præsident?, sagde hun.
Tysklands ambassadør i Kina, Patricia Flor, har som konsekvens af udtalelsen været til en kammeratlig samtale, oplyser det tyske udenrigsministerium til CNN."""
tokens = tokenizer(
prompt_template.format(text=artikel, word_count=WORD_SPLIT_COUNT),
return_tensors='pt'
)['input_ids']
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_length = 8194,
eos_token_id = 29913
)
```
Example:
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde 50 ord.
### Input:
Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen. Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011. De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for "Munken".
### Response:
```
This returns the following dictionary:
```
{'splits': ['Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense. Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen.', 'Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011.', "De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for 'Munken'."], 'topic': 'Beskrivelse af Munkebjergkvarteret i Odense.'}
```
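To turn the raw generation into that dictionary programmatically, one option (continuing the Python example above; the response-marker splitting and `ast.literal_eval` call are assumptions about the output format, not part of the original card) is:
```python
import ast

# Decode the generated ids and keep only the text after the response marker.
decoded = tokenizer.decode(generation_output[0], skip_special_tokens=True)
response = decoded.split("### Response:")[-1].strip()

# The model emits a Python-style dict literal (single quotes), so ast.literal_eval
# is used here instead of json.loads.
result = ast.literal_eval(response)
print(result["topic"])
for part in result["splits"]:
    print(part)
```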
## Prompt format
The model follows alpaca format.
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {WORD_COUNT} ord.
### Input:
{TEXT}
### Response:
```
|
monsterapi/Gptj-6b_alpaca-gpt4 | monsterapi | 2023-09-19T10:45:10Z | 14 | 0 | peft | [
"peft",
"gptj-6b",
"instruct",
"instruct-alpaca",
"alpaca",
"gpt4",
"dataset:vicgalle/alpaca-gpt4",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"region:us"
]
| null | 2023-06-28T06:44:13Z | ---
library_name: peft
tags:
- gptj-6b
- instruct
- instruct-alpaca
- alpaca
- gpt4
datasets:
- vicgalle/alpaca-gpt4
base_model: EleutherAI/gpt-j-6b
---
We finetuned GPT-J-6B on the Alpaca-GPT4 instruction dataset (vicgalle/alpaca-gpt4) for 10 epochs or ~ 50,000 steps using [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset is the unfiltered vicgalle/alpaca-gpt4.
The finetuning session completed in 7 hours and cost us only `$25` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model Path: EleutherAI/gpt-j-6b
- Dataset: vicgalle/alpaca-gpt4
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
---
license: apache-2.0
--- |
monsterapi/llama2-7b-tiny-codes-code-generation | monsterapi | 2023-09-19T10:43:52Z | 4 | 1 | peft | [
"peft",
"llama2",
"llama2-7b",
"code generation",
"code-generation",
"code",
"instruct",
"instruct-code",
"code-alpaca",
"alpaca-instruct",
"alpaca",
"llama7b",
"gpt2",
"dataset:nampdn-ai/tiny-codes",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
]
| null | 2023-08-16T14:46:07Z | ---
license: apache-2.0
library_name: peft
tags:
- llama2
- llama2-7b
- code generation
- code-generation
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
- gpt2
datasets:
- nampdn-ai/tiny-codes
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
We finetuned [Llama 2 7B model](https://huggingface.co/meta-llama/Llama-2-7b-hf) from Meta on [nampdn-ai/tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes) for ~ 10,000 steps using [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset contains **1.63 million rows** and is a collection of short and clear code snippets that can help LLM models learn how to reason with both natural and programming languages. The dataset covers a wide range of programming languages, such as Python, TypeScript, JavaScript, Ruby, Julia, Rust, C++, Bash, Java, C#, and Go. It also includes two database languages: Cypher (for graph databases) and SQL (for relational databases) in order to study the relationship of entities.
The finetuning session completed in 193 minutes and cost us only ~ `$7.5` for the entire finetuning run!
#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: nampdn-ai/tiny-codes
- Learning rate: 0.0002
- Number of epochs: 1 (10k steps)
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
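A hedged inference sketch for trying the adapter on a code-generation prompt (the base checkpoint id comes from this card's metadata; the prompt and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
# Attach the LoRA adapter trained on tiny-codes.
model = PeftModel.from_pretrained(base, "monsterapi/llama2-7b-tiny-codes-code-generation")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```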
### Framework versions
- PEFT 0.4.0
### Loss metrics:
 |
Carve/tracer_b7 | Carve | 2023-09-19T10:31:03Z | 0 | 12 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-09-01T15:43:56Z | ---
license: apache-2.0
---
`tracer-b7.pth` - Pretrained TRACER with EfficientNet v1 b7 encoder.
`tracer-b7-carveset-finetuned.pth` - The model of tracer b7, which has been finetuned on the CarveSet dataset. This model achieves an average F-Beta score of 96.2% on the test set. |
CyberHarem/ooishi_izumi_idolmastercinderellagirls | CyberHarem | 2023-09-19T10:25:51Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/ooishi_izumi_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-19T10:08:09Z | ---
license: mit
datasets:
- CyberHarem/ooishi_izumi_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ooishi_izumi_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5040, you need to download `5040/ooishi_izumi_idolmastercinderellagirls.pt` as the embedding and `5040/ooishi_izumi_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5040**, with a score of 0.949. The trigger words are:
1. `ooishi_izumi_idolmastercinderellagirls`
2. `long_hair, brown_eyes, blush, black_hair, breasts, smile, bangs, medium_breasts, hair_between_eyes`
For the following groups, use of this model is not recommended, and we apologize:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5400 | 0.929 | [Download](5400/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5400/previews/pattern_2.png) |  |  | [<NSFW, click to see>](5400/previews/pattern_5.png) | [<NSFW, click to see>](5400/previews/pattern_6.png) | [<NSFW, click to see>](5400/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5400/previews/pattern_11.png) | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) | [<NSFW, click to see>](5400/previews/free.png) |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| **5040** | **0.949** | [**Download**](5040/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5040/previews/pattern_2.png) |  |  | [<NSFW, click to see>](5040/previews/pattern_5.png) | [<NSFW, click to see>](5040/previews/pattern_6.png) | [<NSFW, click to see>](5040/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5040/previews/pattern_11.png) | [<NSFW, click to see>](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) | [<NSFW, click to see>](5040/previews/free.png) |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4680 | 0.870 | [Download](4680/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4680/previews/pattern_2.png) |  |  | [<NSFW, click to see>](4680/previews/pattern_5.png) | [<NSFW, click to see>](4680/previews/pattern_6.png) | [<NSFW, click to see>](4680/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4680/previews/pattern_11.png) | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) | [<NSFW, click to see>](4680/previews/free.png) |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4320 | 0.866 | [Download](4320/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4320/previews/pattern_2.png) |  |  | [<NSFW, click to see>](4320/previews/pattern_5.png) | [<NSFW, click to see>](4320/previews/pattern_6.png) | [<NSFW, click to see>](4320/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4320/previews/pattern_11.png) | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | [<NSFW, click to see>](4320/previews/free.png) |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3960 | 0.865 | [Download](3960/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3960/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3960/previews/pattern_5.png) | [<NSFW, click to see>](3960/previews/pattern_6.png) | [<NSFW, click to see>](3960/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3960/previews/pattern_11.png) | [<NSFW, click to see>](3960/previews/bikini.png) | [<NSFW, click to see>](3960/previews/bondage.png) | [<NSFW, click to see>](3960/previews/free.png) |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3600 | 0.904 | [Download](3600/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3600/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3600/previews/pattern_5.png) | [<NSFW, click to see>](3600/previews/pattern_6.png) | [<NSFW, click to see>](3600/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3600/previews/pattern_11.png) | [<NSFW, click to see>](3600/previews/bikini.png) | [<NSFW, click to see>](3600/previews/bondage.png) | [<NSFW, click to see>](3600/previews/free.png) |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3240 | 0.940 | [Download](3240/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3240/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3240/previews/pattern_5.png) | [<NSFW, click to see>](3240/previews/pattern_6.png) | [<NSFW, click to see>](3240/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3240/previews/pattern_11.png) | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) | [<NSFW, click to see>](3240/previews/free.png) |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2880 | 0.903 | [Download](2880/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2880/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2880/previews/pattern_5.png) | [<NSFW, click to see>](2880/previews/pattern_6.png) | [<NSFW, click to see>](2880/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2880/previews/pattern_11.png) | [<NSFW, click to see>](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | [<NSFW, click to see>](2880/previews/free.png) |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2520 | 0.922 | [Download](2520/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2520/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2520/previews/pattern_5.png) | [<NSFW, click to see>](2520/previews/pattern_6.png) | [<NSFW, click to see>](2520/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2520/previews/pattern_11.png) | [<NSFW, click to see>](2520/previews/bikini.png) | [<NSFW, click to see>](2520/previews/bondage.png) | [<NSFW, click to see>](2520/previews/free.png) |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2160 | 0.854 | [Download](2160/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2160/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2160/previews/pattern_5.png) | [<NSFW, click to see>](2160/previews/pattern_6.png) | [<NSFW, click to see>](2160/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2160/previews/pattern_11.png) | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) | [<NSFW, click to see>](2160/previews/free.png) |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1800 | 0.801 | [Download](1800/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1800/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1800/previews/pattern_5.png) | [<NSFW, click to see>](1800/previews/pattern_6.png) | [<NSFW, click to see>](1800/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1800/previews/pattern_11.png) | [<NSFW, click to see>](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | [<NSFW, click to see>](1800/previews/free.png) |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1440 | 0.803 | [Download](1440/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1440/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1440/previews/pattern_5.png) | [<NSFW, click to see>](1440/previews/pattern_6.png) | [<NSFW, click to see>](1440/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1440/previews/pattern_11.png) | [<NSFW, click to see>](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) | [<NSFW, click to see>](1440/previews/free.png) |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 1080 | 0.769 | [Download](1080/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1080/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1080/previews/pattern_5.png) | [<NSFW, click to see>](1080/previews/pattern_6.png) | [<NSFW, click to see>](1080/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1080/previews/pattern_11.png) | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) | [<NSFW, click to see>](1080/previews/free.png) |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 720 | 0.541 | [Download](720/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](720/previews/pattern_2.png) |  |  | [<NSFW, click to see>](720/previews/pattern_5.png) | [<NSFW, click to see>](720/previews/pattern_6.png) | [<NSFW, click to see>](720/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](720/previews/pattern_11.png) | [<NSFW, click to see>](720/previews/bikini.png) | [<NSFW, click to see>](720/previews/bondage.png) | [<NSFW, click to see>](720/previews/free.png) |  |  | [<NSFW, click to see>](720/previews/nude.png) | [<NSFW, click to see>](720/previews/nude2.png) |  |  |
| 360 | 0.621 | [Download](360/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](360/previews/pattern_2.png) |  |  | [<NSFW, click to see>](360/previews/pattern_5.png) | [<NSFW, click to see>](360/previews/pattern_6.png) | [<NSFW, click to see>](360/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](360/previews/pattern_11.png) | [<NSFW, click to see>](360/previews/bikini.png) | [<NSFW, click to see>](360/previews/bondage.png) | [<NSFW, click to see>](360/previews/free.png) |  |  | [<NSFW, click to see>](360/previews/nude.png) | [<NSFW, click to see>](360/previews/nude2.png) |  |  |
|
trieudemo11/llama_7b_attrb_cate_4m_2 | trieudemo11 | 2023-09-19T10:25:07Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T10:24:53Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
royokong/prompteol-opt-2.7b | royokong | 2023-09-19T10:21:14Z | 390 | 0 | peft | [
"peft",
"base_model:facebook/opt-2.7b",
"base_model:adapter:facebook/opt-2.7b",
"region:us"
]
| null | 2023-07-27T15:02:56Z | ---
library_name: peft
base_model: facebook/opt-2.7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Wariano/bsc-bio-ehr-es-vih-juicio_anam_urgen | Wariano | 2023-09-19T10:13:27Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T06:38:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bsc-bio-ehr-es-vih-juicio_anam_urgen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bsc-bio-ehr-es-vih-juicio_anam_urgen
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0364
- Positives Preds: 1040
- Negative Preds: 208738
- Positives Refs: 1961
- Negative Refs: 207817
- Tp: 826
- Fn: 1135
- Fp: 214
- Tn: 207603
- Accuracy: 0.9936
- Precision: 0.7942
- Recall: 0.4212
- F1: 0.5505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Positives Preds | Negative Preds | Positives Refs | Negative Refs | Tp | Fn | Fp | Tn | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:------:|:---------------:|:---------------:|:--------------:|:--------------:|:-------------:|:---:|:----:|:---:|:------:|:--------:|:---------:|:------:|:------:|
| 0.0372 | 1.0 | 26223 | 0.0358 | 1276 | 208502 | 1961 | 207817 | 888 | 1073 | 388 | 207429 | 0.9930 | 0.6959 | 0.4528 | 0.5487 |
| 0.04 | 2.0 | 52446 | 0.0364 | 1223 | 208555 | 1961 | 207817 | 873 | 1088 | 350 | 207467 | 0.9931 | 0.7138 | 0.4452 | 0.5484 |
| 0.037 | 3.0 | 78669 | 0.0362 | 1251 | 208527 | 1961 | 207817 | 870 | 1091 | 381 | 207436 | 0.9930 | 0.6954 | 0.4437 | 0.5417 |
| 0.0368 | 4.0 | 104892 | 0.0361 | 1125 | 208653 | 1961 | 207817 | 848 | 1113 | 277 | 207540 | 0.9934 | 0.7538 | 0.4324 | 0.5496 |
| 0.0367 | 5.0 | 131115 | 0.0364 | 1040 | 208738 | 1961 | 207817 | 826 | 1135 | 214 | 207603 | 0.9936 | 0.7942 | 0.4212 | 0.5505 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MattStammers/appo-Humanoid | MattStammers | 2023-09-19T10:11:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T10:11:11Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_humanoid
type: mujoco_humanoid
metrics:
- type: mean_reward
value: 6743.15 +/- 2083.46
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_humanoid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-Humanoid
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_humanoid --train_dir=./train_dir --experiment=appo-Humanoid
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_humanoid --train_dir=./train_dir --experiment=appo-Humanoid --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
iashchak/ruGPT-3.5-13B-ggml | iashchak | 2023-09-19T10:05:25Z | 12 | 15 | transformers | [
"transformers",
"gpt2",
"ruGPT",
"GGML",
"NLP",
"Text Generation",
"ru",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-10T18:48:07Z | ---
language:
- ru
- en
tags:
- ruGPT
- GGML
- NLP
- Text Generation
license: "mit"
---
# ruGPT-3.5-13B Converted to GGML Format / ruGPT-3.5-13B Конвертированная в формат GGML
## Model Description / Описание модели
### English
This repository contains a GGML-formatted version of the [ruGPT-3.5-13B model](https://huggingface.co/ai-forever/ruGPT-3.5-13B) originally hosted on Hugging Face. The model has 13 billion parameters and was initially trained on a 300GB dataset from various domains. It was further fine-tuned on 100GB of code and legal documents. The model understands both Russian and English.
#### Dataset Details
- **Training Data**: 300GB from various domains
- **Fine-tuning Data**: 100GB of code and legal documents
- **Technical Specs**: Trained with the Deepspeed and Megatron libraries on a 300B-token dataset for 3 epochs (around 45 days on 512 V100 GPUs). Fine-tuned for 1 epoch with a sequence length of 2048 (around 20 days on 200 A100 GPUs).
- **Perplexity**: Around 8.8 for Russian language
#### Usage
##### For QuantizationType.Q4_0 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_0-ggjt.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q4_0 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_0.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q4_1 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_1-ggjt.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q4_1 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_1.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q5_0 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_0-ggjt.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q5_0 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_0.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q5_1 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_1-ggjt.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q5_1 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_1.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q8_0 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q8_0-ggjt.bin")
print(model.generate("The meaning of life is ").text)
```
##### For QuantizationType.Q8_0 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q8_0.bin")
print(model.generate("The meaning of life is ").text)
```
##### f16 Version
```python
# f16 Version
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-f16.bin")
print(model.generate("Смысл жизни в ").text)
```
#### Compatibility
While this model is intended to be compatible with any GGML-compatible UI, it has not been extensively tested in such environments. Use at your own risk.
### Русский
Этот репозиторий содержит версию модели [ruGPT-3.5-13B](https://huggingface.co/ai-forever/ruGPT-3.5-13B) в формате GGML. Модель имеет 13 миллиардов параметров и изначально обучалась на 300ГБ данных из различных доменов. Далее она была дообучена на 100ГБ кода и юридических документов. Модель понимает как русский, так и английский языки.
#### Детали набора данных
- **Тренировочные данные**: 300ГБ из различных доменов
- **Данные для дообучения**: 100ГБ кода и юридических документов
- **Технические характеристики**: Обучена с использованием библиотек Deepspeed и Megatron на наборе данных из 300 миллиардов токенов за 3 эпохи, примерно 45 дней на 512 GPU V100. Дообучена 1 эпоху с длиной последовательности 2048, примерно 20 дней на 200 GPU A100.
- **Перплексия**: Около 8,8 для русского языка
#### Использование
##### Для QuantizationType.Q4_0 и ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_0-ggjt.bin")
print(model.generate("Смысл жизни в ").text)
```
##### Для QuantizationType.Q4_0 и ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_0.bin")
print(model.generate("Смысл жизни в ").text)
```
##### Для QuantizationType.Q4_1 и ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_1-ggjt.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q4_1 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q4_1.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q5_0 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_0-ggjt.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q5_0 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_0.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q5_1 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_1-ggjt.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q5_1 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q5_1.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q8_0 and ContainerType.GGJT
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q8_0-ggjt.bin")
print(model.generate("Смысл жизни в ").text)
```
##### For QuantizationType.Q8_0 and ContainerType.GGML
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-q8_0.bin")
print(model.generate("Смысл жизни в ").text)
```
##### f16 Version
```python
from llm_rs import AutoModel
model = AutoModel.from_pretrained("iashchak/ruGPT-3.5-13B-ggml", model_file="ruGPT-3.5-13B-f16.bin")
print(model.generate("Смысл жизни в ").text)
```
#### Compatibility
While this model is intended to be compatible with any GGML-compatible UI, it has not been extensively tested in such environments. Use at your own risk.
|
davidkim205/komt-Llama-2-7b-chat-hf-ggml | davidkim205 | 2023-09-19T10:00:39Z | 0 | 6 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"llama-2-chat",
"text-generation",
"en",
"ko",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-08-16T07:21:58Z | ---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
license: apache-2.0
---
# komt : korean multi task instruction tuning model
Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0 |
HumanCompatibleAI/sac-seals-Swimmer-v1 | HumanCompatibleAI | 2023-09-19T09:57:29Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Swimmer-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:56:38Z | ---
library_name: stable-baselines3
tags:
- seals/Swimmer-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Swimmer-v1
type: seals/Swimmer-v1
metrics:
- type: mean_reward
value: 28.90 +/- 1.67
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Swimmer-v1**
This is a trained model of a **SAC** agent playing **seals/Swimmer-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Swimmer-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Swimmer-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Swimmer-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Swimmer-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Swimmer-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Swimmer-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.995),
('learning_rate', 0.00039981805535514633),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -2.689958330139309,
'net_arch': [400, 300],
'use_sde': False}),
('tau', 0.01),
('train_freq', 256),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
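Beyond the RL Zoo command line, the saved agent can also be loaded directly in Python. The sketch below is only an illustration: the checkpoint filename follows the usual RL Zoo naming convention and is an assumption, and the `seals` package must be installed for the environment to exist.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

# Assumed filename; adjust to the file actually stored in this repository.
checkpoint = load_from_hub("HumanCompatibleAI/sac-seals-Swimmer-v1", "sac-seals-Swimmer-v1.zip")
model = SAC.load(checkpoint)

# Quick smoke test on a sampled observation (no environment rollout).
obs = model.observation_space.sample()
action, _ = model.predict(obs, deterministic=True)
print(action)
```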
|
HumanCompatibleAI/sac-seals-Ant-v1 | HumanCompatibleAI | 2023-09-19T09:54:44Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Ant-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:53:41Z | ---
library_name: stable-baselines3
tags:
- seals/Ant-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Ant-v1
type: seals/Ant-v1
metrics:
- type: mean_reward
value: 1004.15 +/- 26.60
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Ant-v1**
This is a trained model of a **SAC** agent playing **seals/Ant-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Ant-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Ant-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Ant-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Ant-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Ant-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Ant-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('gamma', 0.98),
('learning_rate', 0.0018514039303149058),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -2.2692589009754176,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.05),
('train_freq', 64),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/sac-seals-Hopper-v1 | HumanCompatibleAI | 2023-09-19T09:52:21Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Hopper-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:51:24Z | ---
library_name: stable-baselines3
tags:
- seals/Hopper-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Hopper-v1
type: seals/Hopper-v1
metrics:
- type: mean_reward
value: 2279.30 +/- 124.09
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Hopper-v1**
This is a trained model of a **SAC** agent playing **seals/Hopper-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Hopper-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Hopper-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.98),
('learning_rate', 0.001709807687567946),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -1.6829391077276037,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.08),
('train_freq', 32),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
EladAssia/LunarLander-v2 | EladAssia | 2023-09-19T09:48:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:48:34Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 296.39 +/- 17.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the PPO agent.
checkpoint = load_from_hub("EladAssia/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HumanCompatibleAI/ppo-seals-Humanoid-v1 | HumanCompatibleAI | 2023-09-19T09:47:36Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Humanoid-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:46:15Z | ---
library_name: stable-baselines3
tags:
- seals/Humanoid-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Humanoid-v1
type: seals/Humanoid-v1
metrics:
- type: mean_reward
value: 3224.12 +/- 925.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Humanoid-v1**
This is a trained model of a **PPO** agent playing **seals/Humanoid-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Humanoid-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Humanoid-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.2),
('ent_coef', 2.0745206045994986e-05),
('gae_lambda', 0.92),
('gamma', 0.999),
('learning_rate', 2.0309225666232827e-05),
('max_grad_norm', 0.5),
('n_envs', 1),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 10000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.819262464558427),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/ppo-seals-Hopper-v1 | HumanCompatibleAI | 2023-09-19T09:45:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Hopper-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:44:39Z | ---
library_name: stable-baselines3
tags:
- seals/Hopper-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Hopper-v1
type: seals/Hopper-v1
metrics:
- type: mean_reward
value: 203.45 +/- 1.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Hopper-v1**
This is a trained model of a **PPO** agent playing **seals/Hopper-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Hopper-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Hopper-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Hopper-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Hopper-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 0.1),
('ent_coef', 0.0010159833764878474),
('gae_lambda', 0.98),
('gamma', 0.995),
('learning_rate', 0.0003904770450788824),
('max_grad_norm', 0.9),
('n_envs', 1),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.995, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.20315938606555833),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.995,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/ppo-seals-Swimmer-v1 | HumanCompatibleAI | 2023-09-19T09:44:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Swimmer-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:43:45Z | ---
library_name: stable-baselines3
tags:
- seals/Swimmer-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Swimmer-v1
type: seals/Swimmer-v1
metrics:
- type: mean_reward
value: 292.84 +/- 3.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Swimmer-v1**
This is a trained model of a **PPO** agent playing **seals/Swimmer-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Swimmer-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Swimmer-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Swimmer-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Swimmer-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Swimmer-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Swimmer-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.1),
('ent_coef', 5.167107294612664e-08),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 0.0001214437022727675),
('max_grad_norm', 2),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.6162112311062333),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
EladAssia/LunarLanderV2 | EladAssia | 2023-09-19T09:44:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:43:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.49 +/- 17.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the PPO agent.
checkpoint = load_from_hub("EladAssia/LunarLanderV2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HumanCompatibleAI/ppo-seals-Ant-v1 | HumanCompatibleAI | 2023-09-19T09:43:32Z | 49 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Ant-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T09:42:28Z | ---
library_name: stable-baselines3
tags:
- seals/Ant-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Ant-v1
type: seals/Ant-v1
metrics:
- type: mean_reward
value: 2461.22 +/- 674.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Ant-v1**
This is a trained model of a **PPO** agent playing **seals/Ant-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Ant-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Ant-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Ant-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Ant-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Ant-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Ant-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('clip_range', 0.3),
('ent_coef', 3.1441389214159857e-06),
('gae_lambda', 0.8),
('gamma', 0.995),
('learning_rate', 0.00017959211641976886),
('max_grad_norm', 0.9),
('n_epochs', 10),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.995, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.4351450387648799),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.995,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/ppo-seals-CartPole-v0 | HumanCompatibleAI | 2023-09-19T09:41:41Z | 737 | 16 | stable-baselines3 | [
"stable-baselines3",
"seals/CartPole-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-29T13:39:32Z | ---
library_name: stable-baselines3
tags:
- seals/CartPole-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/CartPole-v0
type: seals/CartPole-v0
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/CartPole-v0**
This is a trained model of a **PPO** agent playing **seals/CartPole-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/CartPole-v0 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/CartPole-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/CartPole-v0 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/CartPole-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/CartPole-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/CartPole-v0 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.4),
('ent_coef', 0.008508727919228772),
('gae_lambda', 0.9),
('gamma', 0.9999),
('learning_rate', 0.0012403278189645594),
('max_grad_norm', 0.8),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.489343896591493),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/ppo-seals-MountainCar-v0 | HumanCompatibleAI | 2023-09-19T09:41:07Z | 19 | 1 | stable-baselines3 | [
"stable-baselines3",
"seals/MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-25T10:59:50Z | ---
library_name: stable-baselines3
tags:
- seals/MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/MountainCar-v0
type: seals/MountainCar-v0
metrics:
- type: mean_reward
value: -97.00 +/- 8.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/MountainCar-v0**
This is a trained model of a **PPO** agent playing **seals/MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/MountainCar-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/MountainCar-v0 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 0.2),
('ent_coef', 6.4940755116195606e-06),
('gae_lambda', 0.98),
('gamma', 0.99),
('learning_rate', 0.0004476103728105138),
('max_grad_norm', 1),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 256),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.99, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.25988158989488963),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.99,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Vivekup/whisper-small | Vivekup | 2023-09-19T09:22:11Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-19T09:12:17Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Vivekup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Vivekup
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
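A minimal inference sketch with the Transformers ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Hindi speech recognition.
asr = pipeline("automatic-speech-recognition", model="Vivekup/whisper-small")
print(asr("sample_hindi_audio.wav")["text"])  # placeholder path to a local audio file
```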
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
adhishezio/model | adhishezio | 2023-09-19T09:07:01Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-19T08:02:47Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - adhishezio/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
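A minimal diffusers sampling sketch (GPU usage and fp16 dtype are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("adhishezio/model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the instance prompt the weights were trained on.
image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```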
|
YaminiMahesh/llma2-7b-text-to-sql | YaminiMahesh | 2023-09-19T09:05:51Z | 21 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T08:13:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
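A loading sketch that mirrors the quantization settings above. The base model id below is an assumption (the adapter config in this repository records the actual base); everything else follows the listed `bitsandbytes` values.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit NF4 config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: a Llama-2-7B base
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "YaminiMahesh/llma2-7b-text-to-sql")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```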
|
bongo2112/sdxl-db-mwijaku-headshot | bongo2112 | 2023-09-19T09:05:29Z | 2 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-19T09:01:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of mwijakudc man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
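A minimal sampling sketch, assuming AutoTrain stored DreamBooth LoRA weights in this repository (GPU usage and fp16 dtype are also assumptions):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bongo2112/sdxl-db-mwijaku-headshot")  # assumed to contain LoRA weights

image = pipe("photo of mwijakudc man", num_inference_steps=30).images[0]
image.save("mwijakudc.png")
```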
|
Leekp/toonmaker5 | Leekp | 2023-09-19T09:02:08Z | 1 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-19T09:02:00Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Korean webtoon image depicting a character named baek
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
dg845/unidiffuser-diffusers | dg845 | 2023-09-19T08:48:10Z | 13 | 1 | diffusers | [
"diffusers",
"text-to-image",
"image-to-text",
"image-captioning",
"image-variation",
"text-variation",
"multi-modality",
"generative model",
"arxiv:2303.06555",
"license:agpl-3.0",
"diffusers:UniDiffuserPipeline",
"region:us"
]
| text-to-image | 2023-04-25T05:02:36Z | ---
license: agpl-3.0
tags:
- text-to-image
- image-to-text
- image-captioning
- image-variation
- text-variation
- multi-modality
- generative model
---
This model is a version of the UniDiffuser-v1 ([original code](https://github.com/thu-ml/unidiffuser), [original model](https://huggingface.co/thu-ml/unidiffuser-v1)) checkpoint which is compatible with `diffusers`.
This is one of two models from the original UniDiffuser release, the other being [UniDiffuser-v0](https://huggingface.co/thu-ml/unidiffuser-v0).
From the original model card:
UniDiffuser is a unified diffusion framework to fit all distributions relevant to a set of multi-modal data in one transformer.
UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead.
Specifically, UniDiffuser employs a variation of transformer, called [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network.
Other components perform as encoders and decoders of different modalities, including a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder finetuned by ourselves.
We provide two versions of UniDiffuser:
- [UniDiffuser-v0](https://huggingface.co/thu-ml/unidiffuser-v0): This version is trained on [LAION-5B](https://laion.ai/), which contains noisy webdata of text-image pairs.
- [UniDiffuser-v1](https://huggingface.co/thu-ml/unidiffuser-v1): This version is resumed from UniDiffuser-v0, and is further trained with a set of less noisy internal text-image pairs. It uses a flag as its input to distinguish webdata and internal data during training.
## Example
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import UniDiffuserPipeline
device = "cuda"
model_id_or_path = "dg845/unidiffuser-diffusers"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path)
pipe.to(device)
# Joint image-text generation. The generation task is automatically inferred.
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
image = sample.images[0]
text = sample.text[0]
image.save("unidiffuser_sample_joint_image.png")
print(text)
# The mode can be set manually. The following is equivalent to the above:
pipe.set_joint_mode()
sample2 = pipe(num_inference_steps=20, guidance_scale=8.0)
# Note that if you set the mode manually the pipeline will no longer attempt
# to automatically infer the mode. You can re-enable this with reset_mode().
pipe.reset_mode()
# Text-to-image generation.
prompt = "an elephant under the sea"
sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image.save("unidiffuser_sample_text2img_image.png")
# Image-to-text generation.
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
response = requests.get(image_url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)
# Image variation can be performed with a image-to-text generation followed by a text-to-image generation:
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
final_image = sample.images[0]
final_image.save("unidiffuser_image_variation_sample.png")
# Text variation can be performed with a text-to-image generation followed by a image-to-text generation:
sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
final_prompt = sample.text[0]
print(final_prompt)
```
## Model Details
- **Model type:** Diffusion-based multi-modal generation model
- **Language(s):** English
- **License:** agpl-3.0
- **Model Description:** This is a model that can perform image, text, text-to-image, image-to-text, and image-text pair generation. Its main component is a [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. Other components perform as encoders and decoders of different modalities, including a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder finetuned by ourselves.
- **Resources for more information:** [GitHub Repository](https://github.com/thu-ml/unidiffuser), [Paper](https://arxiv.org/abs/2303.06555).
## Direct Use
_Note: Most of this section is taken from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), but applies in the same way to UniDiffuser_.
The model should be used following the agpl-3.0 license. Possible usage includes
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. |
takumi12/id2pg_pattern2_en_batchsize8_epoch12 | takumi12 | 2023-09-19T08:37:02Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T08:36:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
- PEFT 0.6.0.dev0
|
Sarmila/pubmed-bert-squad-covidqa | Sarmila | 2023-09-19T08:34:58Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"biology",
"en",
"dataset:covid_qa_deepset",
"dataset:squad",
"base_model:Sarmila/pubmed-bert-squad-covidqa",
"base_model:finetune:Sarmila/pubmed-bert-squad-covidqa",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-17T05:49:24Z | ---
license: mit
base_model: Sarmila/pubmed-bert-squad-covidqa
tags:
- generated_from_trainer
- biology
datasets:
- covid_qa_deepset
- squad
model-index:
- name: pubmed-bert-squad-covidqa
results: []
language:
- en
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmed-bert-squad-covidqa
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext), trained first on SQuAD and then on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set for squad:
- Exact match: 59.0
- F1: 76.32473929579194
- Loss: 1.003116
It achieves the following results on the evaluation set for covidqa:
- Loss: 0.4876
## Model description
This model was trained to test the PubMedBERT BioNLP language model in a question-answering pipeline.
While testing on our custom dataset, we realized that the model did not perform well at all when used directly for QA. Hence, we decided to train on covid_qa_deepset to make the model accustomed to answer extraction. While the covidqa data is very similar to what we intended to use, it is small, so on its own it brought little improvement.
Therefore, we first trained the model on the much larger SQuAD dataset and then on covid_qa_deepset. SQuAD helped the model learn how to extract answers, and covid_qa_deepset adapted it to a domain similar to ours, i.e. biomedicine.
Furthermore, we also first performed MLM on PubMedBERT BioNLP using our dataset and then ran exactly the same pipeline to see the difference, which is [here].
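A minimal extractive-QA sketch with the Transformers pipeline (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sarmila/pubmed-bert-squad-covidqa")
result = qa(
    question="What is the incubation period of the virus?",
    context="The incubation period of the virus is estimated to be between 2 and 14 days.",
)
print(result["answer"], result["score"])
```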
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 51 | 0.4001 |
| No log | 2.0 | 102 | 0.4524 |
| No log | 3.0 | 153 | 0.4876 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3 |
CyberHarem/kiryuu_tsukasa_idolmastercinderellagirls | CyberHarem | 2023-09-19T08:28:08Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/kiryuu_tsukasa_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-19T08:12:41Z | ---
license: mit
datasets:
- CyberHarem/kiryuu_tsukasa_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kiryuu_tsukasa_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3740, you need to download `3740/kiryuu_tsukasa_idolmastercinderellagirls.pt` as the embedding and `3740/kiryuu_tsukasa_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3740**, with the score of 0.993. The trigger words are:
1. `kiryuu_tsukasa_idolmastercinderellagirls`
2. `blonde_hair, purple_eyes, long_hair, jewelry, earrings, smile, bangs, breasts, blush`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.984 | [Download](5100/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.985 | [Download](4760/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.982 | [Download](4420/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.989 | [Download](4080/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| **3740** | **0.993** | [**Download**](3740/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.990 | [Download](3400/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.964 | [Download](3060/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.988 | [Download](2720/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.976 | [Download](2380/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.966 | [Download](2040/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.964 | [Download](1700/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.975 | [Download](1360/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.980 | [Download](1020/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.969 | [Download](680/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.852 | [Download](340/kiryuu_tsukasa_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
muvva/opus-mt-en-hi-finetuned-en-to-hi | muvva | 2023-09-19T08:19:55Z | 25 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-hi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-12T10:39:45Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-hi-finetuned-en-to-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-hi-finetuned-en-to-hi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on the [hind_encorp](https://huggingface.co/datasets/hind_encorp) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1807
- Bleu: 14.0103
- Gen Len: 24.1149
## Model description
More information needed
## Intended uses & limitations
More information needed
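A minimal translation sketch with the Transformers pipeline (the input sentence is a placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="muvva/opus-mt-en-hi-finetuned-en-to-hi")
print(translator("The weather is nice today.")[0]["translation_text"])
```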
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 3.3667 | 1.0 | 13695 | 3.1807 | 14.0103 | 24.1149 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
urbija/ner-bio-annotated-7 | urbija | 2023-09-19T08:19:00Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-19T06:55:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-bio-annotated-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bio-annotated-7
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3098
- Precision: 0.6877
- Recall: 0.7570
- F1: 0.7207
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
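A minimal token-classification sketch with the Transformers pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="urbija/ner-bio-annotated-7", aggregation_strategy="simple")
print(ner("Aspirin reduces the risk of myocardial infarction."))
```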
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 67 | 0.5051 | 0.4822 | 0.6199 | 0.5425 | 0.8298 |
| No log | 2.0 | 134 | 0.3376 | 0.6670 | 0.7173 | 0.6913 | 0.8889 |
| No log | 3.0 | 201 | 0.3098 | 0.6877 | 0.7570 | 0.7207 | 0.8997 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
dhmeltzer/Llama-2-13b-hf-eli5-wiki-1024_qlora_merged | dhmeltzer | 2023-09-19T08:14:05Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-16T18:36:39Z | trained for 3 epochs on ELI5 + simple wiki datasets |
Carve/cascadepsp | Carve | 2023-09-19T08:04:38Z | 0 | 3 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-01-26T15:19:48Z | ---
license: apache-2.0
---
Trained on the MSRA-10K, DUT-OMRON, ECSSD and FSS-1000 datasets. This model is used to refine a segmentation mask produced by a segmentation network.
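For reference, the upstream CascadePSP project ships a `segmentation-refinement` pip package whose typical refinement call looks like the sketch below. Whether the checkpoints in this repository can be dropped into that package directly is an assumption — you may need to place the weights where the package expects them (see the note on the fine-tuned checkpoint below).
```python
import cv2
import segmentation_refinement as refine  # pip install segmentation-refinement

image = cv2.imread("image.jpg")                              # H x W x 3 input image
mask = cv2.imread("coarse_mask.png", cv2.IMREAD_GRAYSCALE)   # coarse mask from your segmentation network

refiner = refine.Refiner(device="cuda:0")                    # downloads the default weights on first use
refined = refiner.refine(image, mask, fast=False, L=900)     # global + local refinement

cv2.imwrite("refined_mask.png", refined)
```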
`cascadepsp_finetuned_carveset.pth` - Finetuned on CarveSet. |
CyberHarem/hanabata_nohkins_futokunoguild | CyberHarem | 2023-09-19T07:56:39Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hanabata_nohkins_futokunoguild",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-19T07:37:00Z | ---
license: mit
datasets:
- CyberHarem/hanabata_nohkins_futokunoguild
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hanabata_nohkins_futokunoguild
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 5400, download `5400/hanabata_nohkins_futokunoguild.pt` as the embedding and `5400/hanabata_nohkins_futokunoguild.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
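As a rough illustration only, the sketch below shows how the two files might be loaded with 🤗 Diffusers. This assumes the exported `.pt` embedding and `.safetensors` LoRA are compatible with Diffusers' loaders, which is not guaranteed for HCP-Diffusion exports; the supported workflow is the one described above.
```python
import torch
from diffusers import StableDiffusionPipeline

repo = "CyberHarem/hanabata_nohkins_futokunoguild"
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# .pt file registered as an embedding under the trigger word, .safetensors applied as a LoRA
pipe.load_textual_inversion(repo, weight_name="5400/hanabata_nohkins_futokunoguild.pt",
                            token="hanabata_nohkins_futokunoguild")
pipe.load_lora_weights(repo, weight_name="5400/hanabata_nohkins_futokunoguild.safetensors")

image = pipe("hanabata_nohkins_futokunoguild, pink_hair, short_hair, green_eyes").images[0]
image.save("preview.png")
```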
**The best step we recommend is 5400**, with a score of 0.920. The trigger words are:
1. `hanabata_nohkins_futokunoguild`
2. `pink_hair, short_hair, blush, green_eyes, open_mouth, breasts, large_breasts`
We do not recommend this model for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.917 | [Download](8100/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.859 | [Download](7560/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.866 | [Download](7020/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.920 | [Download](6480/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.912 | [Download](5940/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| **5400** | **0.920** | [**Download**](5400/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.905 | [Download](4860/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.889 | [Download](4320/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.918 | [Download](3780/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.884 | [Download](3240/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.850 | [Download](2700/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.890 | [Download](2160/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.861 | [Download](1620/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.818 | [Download](1080/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.802 | [Download](540/hanabata_nohkins_futokunoguild.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
takumi12/id2pg_pattern2_en_batchsize8_epoch30 | takumi12 | 2023-09-19T07:56:33Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T07:56:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
kuldeepsingh-in/kd-project-google-03 | kuldeepsingh-in | 2023-09-19T07:43:12Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-19T07:43:10Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a person kdsingh1009
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
shaowenchen/vicuna-7b-v1.5-16k-gguf | shaowenchen | 2023-09-19T07:42:42Z | 104 | 1 | null | [
"gguf",
"vicuna",
"chinese",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
]
| text-generation | 2023-09-19T01:28:28Z | ---
inference: false
language:
- zh
- en
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/vicuna-7b-v1.5-16k
model_name: vicuna-7b-v1.5-16k
model_type: vicuna
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- vicuna
- chinese
---
## Provided files
| Name | Quant method | Size |
| ------------------------------ | ------------ | ------ |
| vicuna-7b-v1.5-16k.Q2_K.gguf | Q2_K | 2.6 GB |
| vicuna-7b-v1.5-16k.Q3_K.gguf | Q3_K | 3.1 GB |
| vicuna-7b-v1.5-16k.Q3_K_L.gguf | Q3_K_L | 3.3 GB |
| vicuna-7b-v1.5-16k.Q3_K_S.gguf | Q3_K_S | 2.7 GB |
| vicuna-7b-v1.5-16k.Q4_0.gguf | Q4_0 | 3.6 GB |
| vicuna-7b-v1.5-16k.Q4_1.gguf | Q4_1 | 3.9 GB |
| vicuna-7b-v1.5-16k.Q4_K.gguf | Q4_K | 3.8 GB |
| vicuna-7b-v1.5-16k.Q4_K_S.gguf | Q4_K_S | 3.6 GB |
| vicuna-7b-v1.5-16k.Q5_0.gguf | Q5_0 | 4.3 GB |
| vicuna-7b-v1.5-16k.Q5_1.gguf | Q5_1 | 4.7 GB |
| vicuna-7b-v1.5-16k.Q5_K.gguf | Q5_K | 4.5 GB |
| vicuna-7b-v1.5-16k.Q5_K_S.gguf | Q5_K_S | 4.3 GB |
| vicuna-7b-v1.5-16k.Q6_K.gguf | Q6_K | 5.1 GB |
| vicuna-7b-v1.5-16k.Q8_0.gguf | Q8_0 | 6.7 GB |
| vicuna-7b-v1.5-16k.gguf | full | 13 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
## Provided images
| Name | Quant method | Compressed Size |
| ------------------------------------------ | ------------ | --------------- |
| `shaowenchen/vicuna-7b-v1.5-16k-gguf:Q2_K` | Q2_K | 2.88 GB |
| `shaowenchen/vicuna-7b-v1.5-16k-gguf:Q3_K` | Q3_K | 3.3 GB |
| `shaowenchen/vicuna-7b-v1.5-16k-gguf:Q4_K` | Q4_K | 4 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/vicuna-7b-v1.5-16k-gguf:Q2_K
```
and you can view http://localhost:8000/docs to see the swagger UI.
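Assuming the container exposes llama-cpp-python's OpenAI-compatible REST API (as the `/docs` Swagger page suggests), a request can also be sent from Python; the Vicuna-style prompt below is only an illustration:
```python
import requests

payload = {
    "prompt": "USER: Introduce yourself in one sentence.\nASSISTANT:",
    "max_tokens": 128,
    "temperature": 0.7,
}
response = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```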
|
filipealmeida/open-llama-3b-v2-pii-transform | filipealmeida | 2023-09-19T07:39:45Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-31T02:27:12Z | ---
license: apache-2.0
widget:
- text: "### Instruction:\nMy name is Filipe and my phone number is 555-121-2234. How are you?\n### Response:\n"
example_title: "Example 1"
---
# Open Llama based PII anonymizer
## Description
This model, based on the `openlm-research/open_llama_3b_v2` architecture, is designed to automatically anonymize personal identifiable information (PII) from text data. Given a piece of text, the model can replace specific details such as names, addresses, dates, and other personal details with generic or randomized alternatives, thereby safeguarding the privacy of individuals while retaining the overall context of the text.
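A minimal sketch of how the widget example above can be reproduced with 🤗 Transformers (generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "filipealmeida/open-llama-3b-v2-pii-transform"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "### Instruction:\n"
    "My name is Filipe and my phone number is 555-121-2234. How are you?\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```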
## Disclaimer
This model is an experiment and, while it strives to maintain privacy, it may not capture or anonymize all instances of PII in every context. Users should always review and verify the output, especially when dealing with sensitive data. |
albertengineer/lora-trained-xl-colab | albertengineer | 2023-09-19T07:39:25Z | 12 | 2 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-18T07:31:05Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - albertengineer/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
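A minimal inference sketch, assuming the repository stores the LoRA in the standard Diffusers layout produced by the DreamBooth script (settings are illustrative):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("albertengineer/lora-trained-xl-colab")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```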
|
SalmonAI123/Question_answering_test | SalmonAI123 | 2023-09-19T07:31:25Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-15T10:04:15Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Question_answering_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question_answering_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8232
## Model description
More information needed
## Intended uses & limitations
More information needed
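In the meantime, a minimal usage sketch (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SalmonAI123/Question_answering_test")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```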
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4127 |
| 2.7467 | 2.0 | 500 | 1.9516 |
| 2.7467 | 3.0 | 750 | 1.8232 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
sakuraumi/touka_nukitashi | sakuraumi | 2023-09-19T07:30:34Z | 4 | 0 | transformers | [
"transformers",
"audio-to-audio",
"ja",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-09-19T07:00:46Z | ---
license: apache-2.0
language:
- ja
- zh
pipeline_tag: audio-to-audio
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
冷泉院桐香 (Reizeiin Touka) So-VITS Model
</h1>
</div>
# Model Details
- Dataset: all of Touka's voice lines from Nukitashi 1–2, filtered before training
- Training steps: 100k |
Laksitha/autotrain-tosdr_tldr_legal_summarisation_v1-1434353657 | Laksitha | 2023-09-19T07:28:04Z | 106 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"autotrain",
"summarization",
"unk",
"dataset:Laksitha/autotrain-data-tosdr_tldr_legal_summarisation_v1",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-09-11T23:56:10Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Laksitha/autotrain-data-tosdr_tldr_legal_summarisation_v1
co2_eq_emissions:
emissions: 2.9024601099439225
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1434353657
- CO2 Emissions (in grams): 2.9025
## Validation Metrics
- Loss: 2.821
- Rouge1: 32.961
- Rouge2: 10.761
- RougeL: 20.551
- RougeLsum: 30.094
- Gen Len: 92.222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Laksitha/autotrain-tosdr_tldr_legal_summarisation_v1-1434353657
``` |
prince99/model11 | prince99 | 2023-09-19T07:22:51Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T07:22:42Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
Arabic-Clip-Archive/m-bert-base-ViT-B-32-trained-mclip-data | Arabic-Clip-Archive | 2023-09-19T07:17:57Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-09-19T06:21:14Z | The following checkpoint is for m-bert-base-ViT-B-32 trained on mclip dataset ( with try and execept) |
dgbuzzer/test-upload | dgbuzzer | 2023-09-19T07:16:47Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T07:07:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
ai-sherpa/llama-7b-test | ai-sherpa | 2023-09-19T07:14:18Z | 0 | 0 | peft | [
"peft",
"llama",
"region:us"
]
| null | 2023-09-18T06:02:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
checkiejan/prefix-paraphase-45-20-auto | checkiejan | 2023-09-19T07:13:19Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T07:13:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
jackaduma/Baichuan2-7B-Chat-8bits | jackaduma | 2023-09-19T07:09:16Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:1910.07467",
"arxiv:2009.03300",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2023-09-08T07:12:15Z | ---
license: mit
library_name: transformers
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-7B
<!-- Provide a quick summary of what the model is/does. -->
Baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
如果希望使用Baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
If you wish to use Baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Why use Baichuan-7B
- 在同尺寸模型中Baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
- Baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
- 不同于LLaMA完全禁止商业使用,Baichuan-7B使用更宽松的开源协议,允许用于商业目的
- Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, Baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.
## How to Get Started with the Model
inference code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("jackaduma/Baichuan2-7B-Chat-8bits", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jackaduma/Baichuan2-7B-Chat-8bits", device_map="auto", trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("jackaduma/Baichuan2-7B-Chat-8bits")
# non-streaming
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
# streaming
position = 0
for response in model.chat(tokenizer, messages, stream=True):
# print(response)
print(response[position:], end='', flush=True)
position = len(response)
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan-7B License](https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
### Model Sources
<!-- Provide the basic links for the model. -->
整体模型基于标准的Transformer结构,我们采用了和LLaMA一样的模型设计
- **Position Embedding**:采用rotary-embedding,是现阶段被大多数模型采用的位置编码方案,具有很好的外推性。
- **Feedforward Layer**:采用SwiGLU,Feedforward变化为(8/3)倍的隐含层大小,即11008。
- **Layer Normalization**: 基于[RMSNorm](https://arxiv.org/abs/1910.07467)的Pre-Normalization。
具体参数和见下表
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:
- Position Embedding: We use rotary-embedding, which is the position encoding scheme adopted by most models at this stage, and it has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward changes to (8/3) times the size of the hidden layer, that is, 11008.
- Layer Normalization: Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467).
The specific parameters are as follows:
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
在没有充分评估风险和采取缓解措施的情况下投入生产使用;任何可能被视为不负责任或有害的使用案例。
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。Baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
Baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. Baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Training Details
训练具体设置参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
For specific training settings, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Evaluation
### Chinese Evaluation
#### C-Eval
The [C-Eval dataset](https://cevalbenchmark.com/index.html) is a comprehensive Chinese evaluation benchmark for foundation models, covering 52 subjects across four difficulty levels. We used its dev split as the few-shot source and ran 5-shot tests on the test split.
| Model 5-shot | Average | Avg(Hard) | STEM | Social Sciences | Humanities | Others |
|-----------------------------|---------|-----------|------|-----------------|------------|--------|
| GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8 |
| ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6 |
| Claude-v1.3 | 54.2 | 39.0 | 51.9 | 61.7 | 52.1 | 53.7 |
| Claude-instant-v1.0 | 45.9 | 35.5 | 43.1 | 53.8 | 44.2 | 45.4 |
| moss-moon-003-base (16B) | 27.4 | 24.5 | 27.0 | 29.1 | 27.2 | 26.9 |
| Ziya-LLaMA-13B-pretrain | 30.2 | 22.7 | 27.7 | 34.4 | 32.0 | 28.9 |
| LLaMA-7B-hf | 27.1 | 25.9 | 27.1 | 26.8 | 27.9 | 26.3 |
| ChatGLM-6B | 34.5 | 23.1 | 30.4 | 39.6 | 37.4 | 34.5 |
| Falcon-7B | 25.8 | 24.3 | 25.8 | 26.0 | 25.8 | 25.6 |
| Open-LLaMA-v2-pretrain (7B) | 24.0 | 22.5 | 23.1 | 25.3 | 25.2 | 23.2 |
| TigerBot-7B-base | 25.7 | 27.0 | 27.3 | 24.7 | 23.4 | 26.1 |
| Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
| BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
| BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
| **Baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
#### Gaokao
[Gaokao](https://github.com/ExpressAI/AI-Gaokao) is a dataset that uses questions from China's national college entrance examination (Gaokao) to evaluate the capabilities of large language models, covering both language ability and logical reasoning.
We kept only the single-answer multiple-choice questions and ran a unified 5-shot test on all models.
The results are shown below.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 21.41 |
| Ziya-LLaMA-13B-pretrain | 23.17 |
| Falcon-7B | 23.98 |
| TigerBot-7B-base | 25.94 |
| LLaMA-7B | 27.81 |
| ChatGLM-6B | 21.41 |
| BLOOM-7B | 26.96 |
| BLOOMZ-7B | 28.72 |
| Aquila-7B<sup>*</sup> | 24.39 |
| **Baichuan-7B** | **36.24** |
#### AGIEval
[AGIEval](https://github.com/microsoft/AGIEval) is designed to evaluate a model's general abilities on cognition- and problem-solving-related tasks.
We kept only the four-option single-answer multiple-choice questions and, after a random split, ran a unified 5-shot test on all models.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 23.49 |
| Ziya-LLaMA-13B-pretrain | 27.64 |
| Falcon-7B | 27.18 |
| TigerBot-7B-base | 25.19 |
| LLaMA-7B | 28.17 |
| ChatGLM-6B | 23.49 |
| BLOOM-7B | 26.55 |
| BLOOMZ-7B | 30.27 |
| Aquila-7B<sup>*</sup> | 25.58 |
| **Baichuan-7B** | **34.44** |
<sup>*</sup>The Aquila results are taken from the [official BAAI website](https://model.baai.ac.cn/model-detail/100098) and are provided for reference only.
### English Leaderboard
In addition to Chinese, we also tested the model's performance in English.
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the [open-source](https://github.com/hendrycks/test) evaluation scheme, and the final 5-shot results are as follows:
| Model | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup> | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
| Falcon-7B<sup>1</sup> | - | - | - | - | 35.0 |
| mpt-7B<sup>1</sup> | - | - | - | - | 35.6 |
| ChatGLM-6B<sup>0</sup> | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
| BLOOM 7B<sup>0</sup> | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
| BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
| moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
| moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
| **Baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
The superscript in the Model column indicates the source of the results.
```
0:reimplemented
1:https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2:https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
```
## Our Group

|
BanUrsus/rl_course_vizdoom_health_gathering_supreme | BanUrsus | 2023-09-19T06:42:13Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-19T06:42:05Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.05 +/- 5.47
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r BanUrsus/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
lccllccc/textual_inversion_tangseng_sdxl_lora | lccllccc | 2023-09-19T06:37:33Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-18T06:16:02Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - lccllccc/textual_inversion_tangseng_sdxl_lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on an unspecified dataset (`None` in the training config). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
teknium/Phi-Hermes-1.3B | teknium | 2023-09-19T06:29:14Z | 68 | 43 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mixformer-sequential",
"text-generation",
"custom_code",
"en",
"dataset:teknium/openhermes",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:12:40Z | ---
license: other
language:
- en
pipeline_tag: text-generation
datasets:
- teknium/openhermes
---
# Model Card for Phi-Hermes 1.3B
Phi-1.5 fine-tuned on the Hermes dataset
## Model Details
### Model Sources
This model was trained on the OpenHermes Dataset, made by me, which contains over 240,000 mostly GPT-4-generated synthetic datapoints.


## Uses
Let me know!
## How to Get Started with the Model
Phi does not support `device_map="auto"` and does not seem to run inference well in fp16, so use bf16.
Here is working inference code, though it can be improved:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("teknium/Phi-Hermes-1.3B", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("teknium/Phi-Hermes-1.3B", trust_remote_code=True)
inputs = tokenizer(f"### Instruction:\nWrite a negative review for the website, Twitter.\n### Response:\n", return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=128, do_sample=True, temperature=0.2, top_p=0.9, use_cache=True, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
The prompt format is Alpaca; prompts look like this:
```
### Instruction:
<prompt>
### Response:
```
## Training Details
### Training Procedure
Trained with Axolotl. View the wandb runs for all my puffin runs (this is puffin-phi-4 on wandb):
https://wandb.ai/teknium1/hermes-phi/runs/hermes-phi-1
## Evaluation

|
Solitary12138/Frozen-Lake | Solitary12138 | 2023-09-19T06:26:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-11T07:04:26Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Frozen-Lake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL Course notebooks use Gymnasium

# `load_from_hub` is the helper defined in the course notebook (downloads the pickled Q-table)
model = load_from_hub(repo_id="Solitary12138/Frozen-Lake", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
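Continuing from the snippet above, a short greedy-policy rollout; this assumes the pickle follows the Deep RL Course convention and stores the Q-table under the `"qtable"` key (that key name is an assumption):
```python
import numpy as np

qtable = np.array(model["qtable"])
state, _ = env.reset()
done, total_reward = False, 0.0

while not done:
    action = int(np.argmax(qtable[state]))                     # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("Episode return:", total_reward)
```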
|
Meta1408/llama2-qlora-finetunined-french | Meta1408 | 2023-09-19T06:22:24Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-19T04:32:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
yongwah/llama-2-7b-yw | yongwah | 2023-09-19T06:16:56Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T00:21:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|