modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
hanasim/breeze-dsw-tiny-id | hanasim | 2024-01-13T18:14:29Z | 63 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-12T15:28:05Z | ---
language:
- id
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Breeze DSW Indonesian - tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 id
type: mozilla-foundation/common_voice_16_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 43.44465912227436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Breeze DSW Indonesian - tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_16_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7090
- Wer: 43.4447
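As a hedged usage sketch (not part of the original card), the checkpoint should load with the standard `transformers` ASR pipeline; the audio path below is a placeholder:
```python
from transformers import pipeline

# Sketch: load the fine-tuned Whisper checkpoint and transcribe an Indonesian clip.
# "sample_id.wav" is a placeholder path, not a file shipped with the model.
asr = pipeline("automatic-speech-recognition", model="hanasim/breeze-dsw-tiny-id")
print(asr("sample_id.wav")["text"])
```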
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.99 | 0.1 | 100 | 0.8486 | 54.2460 |
| 0.7896 | 1.04 | 200 | 0.7578 | 48.3899 |
| 0.4164 | 1.14 | 300 | 0.7388 | 49.3284 |
| 0.5456 | 2.09 | 400 | 0.7178 | 45.7954 |
| 0.4761 | 3.03 | 500 | 0.7109 | 45.2158 |
| 0.2674 | 3.13 | 600 | 0.7007 | 44.8431 |
| 0.3628 | 4.08 | 700 | 0.7026 | 44.2497 |
| 0.2565 | 5.02 | 800 | 0.7085 | 44.5073 |
| 0.2147 | 5.12 | 900 | 0.7090 | 43.4447 |
| 0.28 | 6.06 | 1000 | 0.7134 | 43.9553 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
ashishsr/Llama-2-7b-chat-hf-fine-tuned-adapters | ashishsr | 2024-01-13T18:06:06Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-13T18:06:03Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
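A minimal sketch, assuming the adapters in this repo target `meta-llama/Llama-2-7b-chat-hf` as the card metadata indicates (a gated model that requires access approval):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the base model, then attach the adapters from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ashishsr/Llama-2-7b-chat-hf-fine-tuned-adapters")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```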
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
hardikJ11/t5-small-finetuned-cnn-news | hardikJ11 | 2024-01-13T18:04:55Z | 97 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2024-01-13T17:50:25Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 23.5402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7209
- Rouge1: 23.5402
- Rouge2: 10.8834
- Rougel: 19.3936
- Rougelsum: 22.1513
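A hedged inference sketch with the `transformers` summarization pipeline (the article text is a placeholder):
```python
from transformers import pipeline

# Sketch: summarize a CNN/DailyMail-style article with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="hardikJ11/t5-small-finetuned-cnn-news")
article = "(placeholder) Paste a news article here."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```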
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2531 | 1.0 | 718 | 2.6722 | 23.3437 | 10.5433 | 19.2183 | 21.8989 |
| 2.1518 | 2.0 | 1436 | 2.7024 | 23.4068 | 10.716 | 19.0751 | 21.9328 |
| 2.0925 | 3.0 | 2154 | 2.7235 | 23.232 | 10.5236 | 19.2254 | 21.8598 |
| 2.0808 | 4.0 | 2872 | 2.7309 | 23.7401 | 10.7664 | 19.4651 | 22.2479 |
| 2.1114 | 5.0 | 3590 | 2.7209 | 23.5402 | 10.8834 | 19.3936 | 22.1513 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mus-shd/ppo-unit8-LunarLander-v2 | mus-shd | 2024-01-13T17:47:01Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-13T17:41:34Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -95.54 +/- 59.68
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'mus-shd/ppo-unit8-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
jysssacc/opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01 | jysssacc | 2024-01-13T17:45:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2024-01-13T17:43:34Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-1.3b
model-index:
- name: opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6301 | 1.0 | 157 | 3.0802 |
| 3.1031 | 2.0 | 314 | 2.9470 |
| 3.0145 | 3.0 | 471 | 2.9682 |
| 2.9372 | 4.0 | 628 | 2.9680 |
| 2.8663 | 5.0 | 785 | 2.9576 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0 |
abdulrehmanraja/urdu-news-1m-model | abdulrehmanraja | 2024-01-13T17:35:55Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:eslamxm/MBart-finetuned-ur-xlsum",
"base_model:finetune:eslamxm/MBart-finetuned-ur-xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-13T17:34:17Z | ---
base_model: eslamxm/MBart-finetuned-ur-xlsum
tags:
- generated_from_trainer
model-index:
- name: urdu-news-1m-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-news-1m-model
This model is a fine-tuned version of [eslamxm/MBart-finetuned-ur-xlsum](https://huggingface.co/eslamxm/MBart-finetuned-ur-xlsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8643 | 0.5 | 500 | 0.8450 |
| 0.8055 | 1.0 | 1000 | 0.7392 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
cappittall/kralsakir_and_cappittall | cappittall | 2024-01-13T17:16:30Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-25T07:07:39Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kral-cappittall Dreambooth model trained by cappittall with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

|
v3ucn/Bert-vits2-Extra-Pretrained_models | v3ucn | 2024-01-13T17:00:56Z | 0 | 3 | null | [
"region:us"
] | null | 2024-01-07T03:24:44Z | Bert-vits2-Extra 中文特化底模 |
shobhit18/q-FrozenLake-v1-4x4-noSlippery | shobhit18 | 2024-01-13T16:47:36Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-12T20:29:04Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="shobhit18/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gflexx/q-FrozenLake-v1-4x4-noSlippery | gflexx | 2024-01-13T16:47:34Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-13T16:47:08Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="gflexx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cemy35/ArcModelcem | cemy35 | 2024-01-13T16:39:17Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | 2024-01-13T16:10:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'make it modern architecture and realistic. '
output:
url: images/4be72a3b-4dbe-4521-a5f6-bf5db77ca5ba.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# ArcmodelEXT
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/cemy35/ArcModelcem/tree/main) them in the Files & versions tab.
|
jysssacc/opt-1.3b_IA3_627_lr5e-05_bs4_epoch5_wd0.01 | jysssacc | 2024-01-13T16:39:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2024-01-13T16:37:53Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-1.3b
model-index:
- name: opt-1.3b_IA3_627_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b_IA3_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.676 | 1.0 | 157 | 3.4596 |
| 3.6527 | 2.0 | 314 | 3.4477 |
| 3.6205 | 3.0 | 471 | 3.4289 |
| 3.6168 | 4.0 | 628 | 3.4106 |
| 3.6125 | 5.0 | 785 | 3.4045 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0 |
sumangpt/adapter_1 | sumangpt | 2024-01-13T16:26:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-01-13T16:26:23Z | ---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
amayprro552/customModelsFID | amayprro552 | 2024-01-13T16:17:19Z | 1 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-13T15:52:57Z | ---
license: creativeml-openrail-m
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>Please read this!</b><br>
This is not yet the full version of the model (read the <b>"Model Description"</b> section).<br>
For version 6.0 it is recommended to use it with a VAE (to improve generation quality and get rid of artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
<b>Model Description</b><br>
Realistic Vision V6.0 "New Vision" is a global update for the Realistic Vision model, which will be released gradually in several beta versions until the full release. The model is aimed at realism and photorealism.<br>
CivitAI Page: https://civitai.com/models/4201/realistic-vision-v60-b1?modelVersionId=245598
<b>Resolutions (use lower resolution if you get a lot of mutations and stuff like that)</b><br>
- Face Portrait: 896x896<br>
- Portrait: 896x896, 768x1024<br>
- Half Body: 768x1024, 640x1152<br>
- Full Body: 896x896, 768x1024, 640x1152, 1024x768, 1152x640<br>
<b>Improvements</b>
- increased generation resolution to resolutions such as 896x896, 768x1024, 640x1152, 1024x768, 1152x640 (note: in some cases there may still be mutations, duplications, etc. -> will be fixed in future versions).<br>
- improved sfw and nsfw for female and female anatomy (note: not all poses work correctly at such large resolutions -> will be fixed in future versions).<br>
<b>Recommended Workflow</b><br>
Images can be generated with or without Hires.Fix, but it will help improve the generation quality significantly. In some cases it is strictly recommended to use Hires.Fix, namely when generating full body and half body images (note: you can also use Restore Faces or ADetailer).<br>
<b>Recommended Generation Parameters</b><br>
Sampler: DPM++ SDE Karras (25+ steps) / DPM++ 2M SDE (50+ steps)<br>
Negative Prompt: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>Recommended Hires.Fix Parameters</b><br>
Sampler: DPM++ SDE Karras or DPM++ 2M SDE<br>
Denoising steps: 10+ (DPM++ SDE Karras) / 20+ (DPM++ 2M SDE) (note: the lower the hires steps value for a given sampler, the stronger the skin texture and the higher the chance of getting artifacts)<br>
Denoising strength: 0.1-0.3<br>
Upscaler: 4x-UltraSharp / 4x_NMKD-Superscale-SP_178000_G or another<br>
Upscale by: 1.1-2.0+<br>
|
shobhit18/ppo-LunarLander-v2 | shobhit18 | 2024-01-13T16:05:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-11T19:14:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.78 +/- 17.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
snewcomer/phi-2-finetuned | snewcomer | 2024-01-13T16:03:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-10T15:47:27Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 600
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
exyl-drl-learn/ppo-SnowballTarget | exyl-drl-learn | 2024-01-13T15:57:17Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-13T15:57:09Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: exyl-drl-learn/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
amayprro552/customFIDIP | amayprro552 | 2024-01-13T15:51:22Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-01-13T15:51:22Z | ---
license: other
license_name: hmkp
license_link: LICENSE
---
|
nvidia/parakeet-ctc-1.1b | nvidia | 2024-01-13T15:44:22Z | 138,220 | 28 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"ctc",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:Switchboard-1",
"dataset:WSJ-0",
"dataset:WSJ-1",
"dataset:National-Singapore-Corpus-Part-1",
"dataset:National-Singapore-Corpus-Part-6",
"dataset:vctk",
"dataset:voxpopuli",
"dataset:europarl",
"dataset:multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:MLCommons/peoples_speech",
"arxiv:2305.05084",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2023-12-28T15:27:57Z | ---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- FastConformer
- Conformer
- pytorch
- NeMo
- hf-asr-leaderboard
- ctc
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: parakeet-ctc-1.1b
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AMI (Meetings test)
type: edinburghcstr/ami
config: ihm
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 15.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Earnings-22
type: revdotcom/earnings22
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 13.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: GigaSpeech
type: speechcolab/gigaspeech
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.27
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.54
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: SPGI Speech
type: kensho/spgispeech
config: test
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.2
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: tedlium-v3
type: LIUM/tedlium
config: release1
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Vox Populi
type: facebook/voxpopuli
config: en
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 6.53
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 9.0
type: mozilla-foundation/common_voice_9_0
config: en
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.02
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Parakeet CTC 1.1B (en)
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
`parakeet-ctc-1.1b` is an ASR model that transcribes speech into lower-case English text. This model was jointly developed by the [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) and [Suno.ai](https://www.suno.ai/) teams.
It is an XXL version of the FastConformer CTC [1] model (around 1.1B parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="nvidia/parakeet-ctc-1.1b")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/parakeet-ctc-1.1b" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 Hz mono-channel audio (wav files) as input.
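If your audio is not already 16 kHz mono, a small preprocessing sketch (using `librosa` and `soundfile`, which the original card does not mention) can convert it first:
```python
import librosa
import soundfile as sf

# Sketch: resample an arbitrary audio file to 16 kHz mono WAV before transcription.
audio, sr = librosa.load("input_audio.mp3", sr=16000, mono=True)
sf.write("input_audio_16k.wav", audio, sr)
```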
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained using CTC loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
The training dataset consists of a private subset with 40K hours of English speech plus 24K hours from the following public datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
## Performance
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.
The following table summarizes the performance of the available models in this collection with the CTC decoder. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
|**Version**|**Tokenizer**|**Vocabulary Size**|**AMI**|**Earnings-22**|**Giga Speech**|**LS test-clean**|**LS test-other**|**SPGI Speech**|**TEDLIUM-v3**|**Vox Populi**|**Common Voice**|
|---------|-----------------------|-----------------|-------|---------------|---------------|-----------------|-----------------|---------------|--------------|--------------|----------------|
| 1.22.0 | SentencePiece Unigram | 1024 | 15.62 | 13.69 | 10.27 | 1.83 | 3.54 | 4.20 | 3.54 | 6.53 | 9.02 |
These are greedy WER numbers without external LM. More details on evaluation can be found at [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [Suno.ai](https://suno.ai/)
[5] [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
ntc-ai/SDXL-LoRA-slider.isometric-view | ntc-ai | 2024-01-13T15:22:32Z | 13 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-01-13T15:22:26Z |
---
language:
- en
thumbnail: "images/evaluate/isometric view.../isometric view_17_3.0.png"
widget:
- text: isometric view
output:
url: images/isometric view_17_3.0.png
- text: isometric view
output:
url: images/isometric view_19_3.0.png
- text: isometric view
output:
url: images/isometric view_20_3.0.png
- text: isometric view
output:
url: images/isometric view_21_3.0.png
- text: isometric view
output:
url: images/isometric view_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "isometric view"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - isometric view (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/isometric view_17_-3.0.png" width=256 height=256 /> | <img src="images/isometric view_17_0.0.png" width=256 height=256 /> | <img src="images/isometric view_17_3.0.png" width=256 height=256 /> |
| <img src="images/isometric view_19_-3.0.png" width=256 height=256 /> | <img src="images/isometric view_19_0.0.png" width=256 height=256 /> | <img src="images/isometric view_19_3.0.png" width=256 height=256 /> |
| <img src="images/isometric view_20_-3.0.png" width=256 height=256 /> | <img src="images/isometric view_20_0.0.png" width=256 height=256 /> | <img src="images/isometric view_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
isometric view
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.isometric-view', weight_name='isometric view.safetensors', adapter_name="isometric view")
# Activate the LoRA
pipe.set_adapters(["isometric view"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, isometric view"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1080+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
abragin/ppo-Huggy | abragin | 2024-01-13T15:21:18Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-01-13T15:21:06Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: abragin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HydraRahul/xyr-sundar-pichai | HydraRahul | 2024-01-13T15:17:13Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-13T15:13:19Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### xyr-sundar-pichai Dreambooth model trained by HydraRahul following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 60448
Sample pictures of this concept:

|
DazMashaly/swin_cont | DazMashaly | 2024-01-13T15:14:30Z | 169 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-13T15:14:07Z | ---
tags:
- image-classification
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: swin_cont
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin_cont
This model was trained from scratch on the zindi dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4766
- eval_accuracy: 0.7545
- eval_runtime: 236.8539
- eval_samples_per_second: 16.352
- eval_steps_per_second: 0.515
- epoch: 2.0
- step: 347
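As a hedged usage sketch (assuming the checkpoint is compatible with the standard `transformers` image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

# Sketch: classify a local image with the trained Swin checkpoint.
classifier = pipeline("image-classification", model="DazMashaly/swin_cont")
print(classifier("example.jpg"))
```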
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
zaq-hack/Noromaid-7B-0.4-DPO-bpw600-h6-exl2 | zaq-hack | 2024-01-13T15:09:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T15:03:55Z | ---
license: cc-by-nc-4.0
---

---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Noromaid-7b-v0.4-DPO.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
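For illustration, a single-turn prompt in this format can be assembled like so (a sketch; the system prompt and user message are placeholders):
```python
# Sketch: build a single-turn ChatML prompt for this model.
sysprompt = "You are Noromaid, a helpful roleplay assistant."   # placeholder
user_input = "Describe the tavern we just walked into."          # placeholder

prompt = (
    f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_input}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
```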
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave in a more human-like way and enhances the output.
- [Aesir Private RP dataset] New data from a never-before-used dataset; adds fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
- [Another private Aesir dataset]
- [Another private Aesir dataset]
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)
## DPO training data used:
- [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [NobodyExistsOnTheInternet/ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [Undi95/toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning)
This is a full finetune.
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
HydraRahul/xyr | HydraRahul | 2024-01-13T14:55:40Z | 0 | 0 | null | [
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-01-13T14:54:53Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### xyr Dreambooth model trained by HydraRahul following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 60448
Sample pictures of this concept:
|
MaziyarPanahi/OpenHermes-2-Mistral-7B-GPTQ | MaziyarPanahi | 2024-01-13T14:54:07Z | 77 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:teknium/OpenHermes-2-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2-Mistral-7B"
] | text-generation | 2024-01-13T14:52:20Z | ---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- en
- base_model:mistralai/Mistral-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: OpenHermes-2-Mistral-7B-GPTQ
base_model: teknium/OpenHermes-2-Mistral-7B
inference: false
model_creator: teknium
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/MaziyarPanahi/OpenHermes-2-Mistral-7B-GPTQ) is a quantized (GPTQ) version of [teknium/OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/OpenHermes-2-Mistral-7B-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
``` |
notaryanramani/my_awesome_billsum_model | notaryanramani | 2024-01-13T14:34:40Z | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-13T14:29:12Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5045
- Rouge1: 0.1425
- Rouge2: 0.0544
- Rougel: 0.119
- Rougelsum: 0.119
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
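Usage is not yet documented here; as a rough sketch (not part of the original card — the model id is this repo, the input text is invented, and the `summarize:` prefix follows the usual T5 convention), inference with the `transformers` pipeline could look like this:
```python
from transformers import pipeline

# Hypothetical inference sketch for this summarization fine-tune of t5-small.
summarizer = pipeline("summarization", model="notaryanramani/my_awesome_billsum_model")

text = "summarize: The bill amends the Internal Revenue Code to extend the renewable energy production credit through 2030 and raises the cap on qualified expenditures."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```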
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7998 | 0.1292 | 0.0372 | 0.1084 | 0.1089 | 19.0 |
| No log | 2.0 | 124 | 2.5835 | 0.1368 | 0.0492 | 0.1152 | 0.1151 | 19.0 |
| No log | 3.0 | 186 | 2.5213 | 0.143 | 0.0552 | 0.1198 | 0.1198 | 19.0 |
| No log | 4.0 | 248 | 2.5045 | 0.1425 | 0.0544 | 0.119 | 0.119 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Razafaheem/finetuning-sentiment-model-ophelia-3 | Razafaheem | 2024-01-13T14:33:23Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T13:52:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-ophelia-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-ophelia-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6147
- Accuracy: 0.9141
## Model description
More information needed
## Intended uses & limitations
More information needed
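Usage is not yet documented here; a minimal, hypothetical sketch (model id taken from this repo, input sentence invented, label names depend on the training config):
```python
from transformers import pipeline

# Hypothetical inference sketch for this sentiment fine-tune of distilbert-base-uncased.
classifier = pipeline("text-classification", model="Razafaheem/finetuning-sentiment-model-ophelia-3")
print(classifier("The plot dragged, but the acting was wonderful."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- labels depend on how the dataset was encoded
```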
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shijia/xlmroberta_clir_back_val_kin | Shijia | 2024-01-13T14:32:39Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T14:32:02Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmroberta_clir_back_val_kin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta_clir_back_val_kin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0849
- Spearman Corr: 0.5144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.0 | 258 | 0.0623 | 0.5007 |
| 0.0408 | 2.0 | 516 | 0.1329 | 0.5051 |
| 0.0408 | 3.0 | 774 | 0.1045 | 0.5012 |
| 0.0197 | 4.0 | 1032 | 0.0849 | 0.5144 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TeeZee/chronomaid-storytelling-13B-bpw8.0-h8-exl2 | TeeZee | 2024-01-13T14:28:55Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T12:58:54Z | ---
license: llama2
tags:
- merge
---
## **Chronomaid-Storytelling-13b**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [NyxKrage/Chronomaid-Storytelling-13b](https://huggingface.co/NyxKrage/Chronomaid-Storytelling-13b)
Runs smoothly on a single 3090 in webui with the context length set to 4096, the ExLlamav2_HF loader, and cache_8bit=True.
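For a command-line launch, those settings translate roughly into the following text-generation-webui invocation (a sketch only: the flag names and model folder are assumptions, so check `python server.py --help` for your webui version):
```shell
python server.py \
  --model TeeZee_chronomaid-storytelling-13B-bpw8.0-h8-exl2 \
  --loader ExLlamav2_HF \
  --max_seq_len 4096 \
  --cache_8bit
```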
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
zanetworker/quick-model | zanetworker | 2024-01-13T14:28:44Z | 46 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T12:19:49Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: quick-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quick-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.9.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
imagepipeline/EpiCRealism-Pure-Evo-v5 | imagepipeline | 2024-01-13T14:26:16Z | 47 | 1 | diffusers | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-13T14:24:51Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## EpiCRealism-Pure-Evo-v5
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/530ba359-f680-4704-aeb1-3fefb2cd632d/width=450/02503-1337.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - How to use?
- Prompt: simple explanation of the image (try first without extra keywords)
- Negative: cartoon, painting, illustration, (worst quality, low quality, normal quality:2)
- Steps: >20 (if the image has errors or artefacts, use higher Steps)
- CFG Scale: 5 (a higher CFG scale can lose realism; depends on prompt, sampler and Steps)
- Sampler: Any Sampler (SDE, DPM samplers will result in more realism)
- Size: 512x768 or 768x512
[](https://imagepipeline.io/models/EpiCRealism-Pure-Evo-v5?id=0b397287-3449-4801-9a86-536465f6189f/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "0b397287-3449-4801-9a86-536465f6189f",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
LarryAIDraw/FatinaV1_1 | LarryAIDraw | 2024-01-13T14:22:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:18:22Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/240699/fatina-or-tower-of-druaga |
imagepipeline/EpiCRealism-Natural-Sin | imagepipeline | 2024-01-13T14:21:10Z | 43 | 0 | diffusers | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-13T14:19:46Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## EpiCRealism-Natural-Sin
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/530ba359-f680-4704-aeb1-3fefb2cd632d/width=450/02503-1337.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - How to use?
- Prompt: simple explanation of the image (try first without extra keywords)
- Negative: cartoon, painting, illustration, (worst quality, low quality, normal quality:2)
- Steps: >20 (if the image has errors or artefacts, use higher Steps)
- CFG Scale: 5 (a higher CFG scale can lose realism; depends on prompt, sampler and Steps)
- Sampler: Any Sampler (SDE, DPM samplers will result in more realism)
- Size: 512x768 or 768x512
[](https://imagepipeline.io/models/EpiCRealism-Natural-Sin?id=86596b30-043c-4e13-9d03-6b0a1dd9f8ff/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "86596b30-043c-4e13-9d03-6b0a1dd9f8ff",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
LarryAIDraw/ImoutoSaeIrebaIi_KaniNayuta | LarryAIDraw | 2024-01-13T14:20:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:11:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/258708/kani-nayuta-or-imouto-sae-ireba-ii |
LarryAIDraw/jill-stingray-000005 | LarryAIDraw | 2024-01-13T14:14:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:05:10Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/263307/jill-stingray-va-11-hall-a |
LarryAIDraw/Sword_Maiden__Goblin_Slayer_ | LarryAIDraw | 2024-01-13T14:14:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:04:49Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/263294/sword-maiden-goblin-slayer |
LarryAIDraw/Yorha_HolyService2B-DEF | LarryAIDraw | 2024-01-13T14:13:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:03:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/238120?modelVersionId=296845 |
LarryAIDraw/LoRA__mhy_honkai_SR_kafka_v1_0 | LarryAIDraw | 2024-01-13T14:12:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T13:55:09Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/263508/kafkaoror-honkai-star-rail-lora |
MaziyarPanahi/komt-mistral-7b-v1-GPTQ | MaziyarPanahi | 2024-01-13T14:07:27Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:davidkim205/komt-mistral-7b-v1",
"base_model:finetune:davidkim205/komt-mistral-7b-v1",
"license:apache-2.0"
] | text-generation | 2024-01-13T14:05:30Z | ---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- finetuned
- en
- ko
- arxiv:2308.06502
- arxiv:2308.06259
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: komt-mistral-7b-v1-GPTQ
base_model: davidkim205/komt-mistral-7b-v1
inference: false
model_creator: davidkim205
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/komt-mistral-7b-v1-GPTQ](https://huggingface.co/MaziyarPanahi/komt-mistral-7b-v1-GPTQ) is a quantized (GPTQ) version of [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/komt-mistral-7b-v1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
``` |
rhndeveloper/orca-2-7B-v01-fine-tuned-using-ludwig-4bit | rhndeveloper | 2024-01-13T14:04:46Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | 2024-01-13T14:04:40Z | ---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
TheBloke/HamSter-0.1-GPTQ | TheBloke | 2024-01-13T14:03:46Z | 18 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"base_model:PotatoOff/HamSter-0.1",
"base_model:quantized:PotatoOff/HamSter-0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-01-13T13:39:40Z | ---
base_model: PotatoOff/HamSter-0.1
inference: false
language:
- en
license: apache-2.0
model_creator: Christian
model_name: Hamster 0.1
model_type: mistral
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Hamster 0.1 - GPTQ
- Model creator: [Christian](https://huggingface.co/PotatoOff)
- Original model: [Hamster 0.1](https://huggingface.co/PotatoOff/HamSter-0.1)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Christian's Hamster 0.1](https://huggingface.co/PotatoOff/HamSter-0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/HamSter-0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/HamSter-0.1-GGUF)
* [Christian's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PotatoOff/HamSter-0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/HamSter-0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
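As an illustration only (not part of the original card), the Bits / GS / Act Order / Damp % columns above correspond to AutoGPTQ quantisation settings roughly as follows, shown here for the `main` branch quant:
```python
from auto_gptq import BaseQuantizeConfig

# Rough mapping of the table columns to AutoGPTQ settings (main branch: 4-bit, 128g, Act Order, damp 0.1).
quantize_config = BaseQuantizeConfig(
    bits=4,            # "Bits" column
    group_size=128,    # "GS" column
    desc_act=True,     # "Act Order" column
    damp_percent=0.1,  # "Damp %" column
)
```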
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/HamSter-0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/HamSter-0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `HamSter-0.1-GPTQ`:
```shell
mkdir HamSter-0.1-GPTQ
huggingface-cli download TheBloke/HamSter-0.1-GPTQ --local-dir HamSter-0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir HamSter-0.1-GPTQ
huggingface-cli download TheBloke/HamSter-0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir HamSter-0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir HamSter-0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/HamSter-0.1-GPTQ --local-dir HamSter-0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/HamSter-0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/HamSter-0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/HamSter-0.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `HamSter-0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/HamSter-0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/HamSter-0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''[INST] {prompt} [/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Christian's Hamster 0.1
# HamSter v0.1
A roleplay-focused fine-tune of [mistralai/Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2).
With the help of [my team](https://huggingface.co/ConvexAI).
- For good performance I recommend using a detailed character card! Check out [Chub.ai](https://chub.ai) for some premade character cards.
- Uses the Mistral prompt template with chat-instruct.
|
Shijia/xlmroberta_clir_all_eng_val_kin | Shijia | 2024-01-13T13:54:58Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T13:54:19Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmroberta_clir_all_eng_val_kin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta_clir_all_eng_val_kin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0931
- Spearman Corr: 0.4035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.0 | 258 | 0.0414 | 0.4662 |
| 0.039 | 2.0 | 516 | 0.0519 | 0.4431 |
| 0.039 | 3.0 | 774 | 0.0481 | 0.4216 |
| 0.0177 | 4.0 | 1032 | 0.0931 | 0.4035 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
akrisn/q-FrozenLake-v1-4x4-noSlippery | akrisn | 2024-01-13T13:53:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-13T13:52:40Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook, not a library import.
model = load_from_hub(repo_id="akrisn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
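A minimal evaluation sketch (an assumption, not from the original card: it expects the Gymnasium-style API where `reset()` returns `(state, info)` and `step()` returns five values, and a `qtable` key in the loaded dict as used by the course notebook):
```python
import numpy as np

# Greedy rollout with the loaded Q-table.
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # best known action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```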
|
gagan3012/Multirial | gagan3012 | 2024-01-13T13:52:19Z | 1,385 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"ar",
"en",
"fr",
"es",
"de",
"hi",
"id",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-12T22:32:13Z | ---
license: apache-2.0
tags:
- moe
- mixtral
language:
- ar
- en
- fr
- es
- de
- hi
- id
- zh
---
# Multirial
MultiRial is the first multilingual Mixture of Experts (MoE) model, built from the following models:
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [azale-ai/Starstreak-7b-beta](https://huggingface.co/azale-ai/Starstreak-7b-beta)
* [gagan3012/Mistral_arabic_dpo](https://huggingface.co/gagan3012/Mistral_arabic_dpo)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gagan3012/Multirial"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
DenisTheDev/Hannah-PASSTHROUGH | DenisTheDev | 2024-01-13T13:51:05Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-1210",
"HuggingFaceH4/zephyr-7b-beta",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T13:45:47Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-1210
- HuggingFaceH4/zephyr-7b-beta
---
# Hannah-PASSTHROUGH
Hannah-PASSTHROUGH is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
- sources:
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
``` |
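The card ends at the merge configuration; a usage sketch in the style of other merged-model cards (everything below is an assumption rather than something from the original card):
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "DenisTheDev/Hannah-PASSTHROUGH"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a passthrough merge?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```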
A2H0H0R1/Qwen-7B-Chat-Int4-Qlora-rice-new-long | A2H0H0R1 | 2024-01-13T13:45:43Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B-Chat-Int4",
"base_model:adapter:Qwen/Qwen-7B-Chat-Int4",
"region:us"
] | null | 2024-01-13T13:29:32Z | ---
library_name: peft
base_model: Qwen/Qwen-7B-Chat-Int4
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
hxxris/haaris-transformer-weight-decay | hxxris | 2024-01-13T13:27:23Z | 146 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-01-13T11:45:09Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-transformer-weight-decay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-transformer-weight-decay
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6394
- Accuracy: 0.0664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 2.6394 | 0.0664 |
| No log | 1.45 | 4 | 2.6394 | 0.0664 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
JDB03/PPO-SnowballTarget | JDB03 | 2024-01-13T13:27:05Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-13T13:25:51Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JDB03/PPO-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dhanang/kategori_aspek_model | Dhanang | 2024-01-13T13:26:35Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T11:57:40Z | ---
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: kategori_aspek_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kategori_aspek_model
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5731
- Accuracy: 0.7532
- F1: 0.7342
- Precision: 0.6791
- Recall: 0.8234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6662 | 1.0 | 1816 | 0.6854 | 0.7449 | 0.7139 | 0.6657 | 0.7857 |
| 0.4846 | 2.0 | 3632 | 0.5731 | 0.7532 | 0.7342 | 0.6791 | 0.8234 |
| 0.3135 | 3.0 | 5448 | 0.6906 | 0.7667 | 0.7431 | 0.7017 | 0.7994 |
| 0.2189 | 4.0 | 7264 | 0.8181 | 0.7755 | 0.7387 | 0.7065 | 0.7994 |
| 0.152 | 5.0 | 9080 | 0.9838 | 0.7893 | 0.7486 | 0.7290 | 0.7799 |
| 0.0938 | 6.0 | 10896 | 1.0601 | 0.7826 | 0.7598 | 0.7314 | 0.7957 |
| 0.0629 | 7.0 | 12712 | 1.3297 | 0.7868 | 0.7665 | 0.7673 | 0.7684 |
| 0.0423 | 8.0 | 14528 | 1.3356 | 0.7906 | 0.7639 | 0.7477 | 0.7875 |
| 0.0178 | 9.0 | 16344 | 1.5868 | 0.7887 | 0.7625 | 0.7656 | 0.7638 |
| 0.008 | 10.0 | 18160 | 1.5453 | 0.7928 | 0.7650 | 0.7621 | 0.7709 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
prashrex/llamamodel | prashrex | 2024-01-13T13:23:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-13T13:13:58Z | ---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
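Until a full example is provided, here is a minimal sketch of loading this adapter on top of its declared base model, `TinyLlama/TinyLlama-1.1B-Chat-v1.0`. Using `prashrex/llamamodel` as the adapter id and the chat-template call are assumptions for illustration, not something verified by the author:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # declared base model of this adapter
adapter_id = "prashrex/llamamodel"                    # assumed: this repository holds the PEFT adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the adapter weights on top of the frozen TinyLlama base
model = PeftModel.from_pretrained(base_model, adapter_id)

# TinyLlama-Chat ships a chat template, so the prompt can be formatted with it
messages = [{"role": "user", "content": "Summarise what a LoRA adapter is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```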
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
AfnanTS/Mutilingual-DBpediaArabic | AfnanTS | 2024-01-13T13:20:02Z | 173 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-01-12T16:34:11Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Mutilingual-DBpediaArabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mutilingual-DBpediaArabic
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
TheBloke/Helion-4x34B-GPTQ | TheBloke | 2024-01-13T13:12:24Z | 9 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"base_model:Weyaxi/Helion-4x34B",
"base_model:quantized:Weyaxi/Helion-4x34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-01-13T05:24:38Z | ---
base_model: Weyaxi/Helion-4x34B
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Helion 4X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Helion 4X34B - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Yağız Çalık's Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 9.96 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 9.93 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 44.21 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 46.28 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Helion-4x34B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Helion-4x34B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Helion-4x34B-GPTQ`:
```shell
mkdir Helion-4x34B-GPTQ
huggingface-cli download TheBloke/Helion-4x34B-GPTQ --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Helion-4x34B-GPTQ
huggingface-cli download TheBloke/Helion-4x34B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Helion-4x34B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Helion-4x34B-GPTQ --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Helion-4x34B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Helion-4x34B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Helion-4x34B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Helion-4x34B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Helion-4x34B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Helion-4x34B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Helion 4X34B

# Helion-4x34B
This is the model for Helion-4x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
- source_model: SUS-Chat-34B
positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
- source_model: platypus-yi-34b
positive_prompts: [""]
negative_prompts: ["math", "reason", "mathematics", "solve", "count"]
```
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Helion-4x34B-GPTQ](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
##### GGUF
- [TheBloke/Helion-4x34B-GGUF](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
##### AWQ
- [TheBloke/Helion-4x34B-AWQ](https://huggingface.co/TheBloke/Helion-4x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
mihael974/speecht5_finetuned_voxpopuli_hr | mihael974 | 2024-01-13T13:10:25Z | 63 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-01-11T23:22:35Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_hr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_hr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2343 | 25.64 | 1000 | 1.2588 |
| 1.2336 | 51.28 | 2000 | 1.2571 |
| 1.2338 | 76.92 | 3000 | 1.2542 |
| 1.2241 | 102.56 | 4000 | 1.2562 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TheBloke/Helion-4x34B-GGUF | TheBloke | 2024-01-13T13:10:11Z | 33 | 3 | transformers | [
"transformers",
"gguf",
"mixtral",
"yi",
"moe",
"base_model:Weyaxi/Helion-4x34B",
"base_model:quantized:Weyaxi/Helion-4x34B",
"license:other",
"region:us",
"conversational"
] | null | 2024-01-13T05:24:38Z | ---
base_model: Weyaxi/Helion-4x34B
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Helion 4X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Helion 4X34B - GGUF
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yağız Çalık's Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
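As an informal sanity check (not part of the original table), the Q4_K figure above can be reproduced with a few lines of arithmetic, assuming one fp16 scale and one fp16 min per 256-weight super-block as in llama.cpp's k-quant layout:

```python
# Bits-per-weight arithmetic for GGML_TYPE_Q4_K (super-block = 8 blocks x 32 weights = 256 weights)
weights = 8 * 32                 # 256 quantised weights per super-block
quant_bits = weights * 4         # 4-bit quants                      -> 1024 bits
block_meta_bits = 8 * (6 + 6)    # 6-bit scale + 6-bit min per block ->   96 bits
super_meta_bits = 2 * 16         # fp16 super-block scale and min    ->   32 bits

bpw = (quant_bits + block_meta_bits + super_meta_bits) / weights
print(bpw)  # 4.5 bits per weight, matching the figure quoted above
```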
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [helion-4x34b.Q2_K.gguf](https://huggingface.co/TheBloke/Helion-4x34B-GGUF/blob/main/helion-4x34b.Q2_K.gguf) | Q2_K | 2 | 41.47 GB| 43.97 GB | smallest, significant quality loss - not recommended for most purposes |
| helion-4x34b.Q3_K_M.gguf | Q3_K_M | 3 | 54.45 GB| 56.95 GB | very small, high quality loss |
| helion-4x34b.Q4_0.gguf | Q4_0 | 4 | 64.06 GB| 66.56 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| helion-4x34b.Q4_K_M.gguf | Q4_K_M | 4 | 68.66 GB| 71.16 GB | medium, balanced quality - recommended |
| helion-4x34b.Q5_0.gguf | Q5_0 | 5 | 78.21 GB| 80.71 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| helion-4x34b.Q5_K_M.gguf | Q5_K_M | 5 | 80.58 GB| 83.08 GB | large, very low quality loss - recommended |
| helion-4x34b.Q6_K.gguf | Q6_K | 6 | 93.25 GB| 95.75 GB | very large, extremely low quality loss |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `helion-4x34b.Q6_K.gguf-split-a`
* `helion-4x34b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `helion-4x34b.Q8_0.gguf-split-a`
* `helion-4x34b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat helion-4x34b.Q6_K.gguf-split-* > helion-4x34b.Q6_K.gguf && rm helion-4x34b.Q6_K.gguf-split-*
cat helion-4x34b.Q8_0.gguf-split-* > helion-4x34b.Q8_0.gguf && rm helion-4x34b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B helion-4x34b.Q6_K.gguf-split-a + helion-4x34b.Q6_K.gguf-split-b helion-4x34b.Q6_K.gguf
del helion-4x34b.Q6_K.gguf-split-a helion-4x34b.Q6_K.gguf-split-b
COPY /B helion-4x34b.Q8_0.gguf-split-a + helion-4x34b.Q8_0.gguf-split-b helion-4x34b.Q8_0.gguf
del helion-4x34b.Q8_0.gguf-split-a helion-4x34b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Helion-4x34B-GGUF and below it, a specific filename to download, such as: helion-4x34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Helion-4x34B-GGUF helion-4x34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Helion-4x34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Helion-4x34B-GGUF helion-4x34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m helion-4x34b.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./helion-4x34b.Q4_K_M.gguf", # Download the model file first
n_ctx=200000, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./helion-4x34b.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this card's prompt template is ChatML
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yağız Çalık's Helion 4X34B

# Helion-4x34B
This is the model for Helion-4x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
- source_model: SUS-Chat-34B
positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
- source_model: platypus-yi-34b
positive_prompts: [""]
negative_prompts: ["math", "reason", "mathematics", "solve", "count"]
```
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Helion-4x34B-GPTQ](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
##### GGUF
- [TheBloke/Helion-4x34B-GGUF](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
##### AWQ
- [TheBloke/Helion-4x34B-AWQ](https://huggingface.co/TheBloke/Helion-4x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
<!-- original-model-card end -->
|
konapieces/VoidnoiseCore | konapieces | 2024-01-13T13:04:30Z | 0 | 3 | diffusers | [
"diffusers",
"art",
"artwork",
"girl",
"stable-diffusion",
"ja",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-05T07:34:54Z | ---
license: creativeml-openrail-m
language:
- ja
- en
library_name: diffusers
tags:
- art
- artwork
- girl
- stable-diffusion
---
# ▼ モデルの詳細 (Model Details)
<details>
<summary>VoidnoiseCore_F0636</summary>
<div>
# 本モデルの概要 (Overview of this model)
本モデルは<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>ライセンスの訓練モデル、SD1.5追加学習モデル、またそれらのみを使用したマージモデルで製作したモデルになります。<br>
This model was produced using only CreativeML OpenRAIL-M licensed training models, SD1.5 additional-training models, and merge models built solely from them.<br>
リアリズム調を基軸に、人物はドール様に描かれるよう調整し、公開予定だったvoidnoiseMix系列を意識して製作されています。<br>
The model is built on a realism-leaning base, tuned so that figures are rendered doll-like, and was created with the previously planned voidnoiseMix series in mind.<br>
クオリティ系のLoRAやLyCORISを使用せずとも、ある程度精細な表現ができるよう調整しています。<br>
It is tuned to achieve a certain degree of detailed expression without the use of quality-focused LoRA or LyCORIS models.<br>
# 推奨設定 (Recommended settings)
- Sampler: DPM++ 2M Karras
- Step: 35 - 40
- CFG scale: 10 - 11
- Denoising strength: 0.55 - 0.6
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 15 - 20
- Hires upscaler: Latent (nearest)
- VAE: <a href="https://huggingface.co/konapieces/LastpieceCore/blob/main/vae/lastpiece.vae.brightness.safetensors" target="_blank">lastpiece.vae.brightness</a>
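The settings above target the AUTOMATIC1111 WebUI. For reference only, a rough Diffusers approximation might look like the sketch below; the checkpoint filename is a placeholder, the recommended VAE and Hires-fix upscaling are not reproduced, and A1111-style `(word:1.5)` attention weighting is not applied by the vanilla pipeline, so treat this as an unverified starting point rather than the author's recommended workflow:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder filename: point this at the VoidnoiseCore_F0636 .safetensors checkpoint from this repo
pipe = StableDiffusionPipeline.from_single_file(
    "VoidnoiseCore_F0636.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, as recommended above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="absurdres, highres, beautiful eyes, detailed background, best quality, 1 girl, solo",
    negative_prompt="worst quality, low quality, monochrome, watermark, text",
    num_inference_steps=38,  # Step: 35 - 40
    guidance_scale=10.5,     # CFG scale: 10 - 11
    clip_skip=2,             # Clip skip: 2
).images[0]
image.save("sample.png")
```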
# 出力サンプル (Sample)
[Prompt]<br>
```
absurdres,highres,beautiful eyes,detailed background,best quality,
BREAK
(1 girl, solo:1.5),bobcut,brown hair,brown eyes:1.3,[smile],(off-shoulder frill blouse and tight jeans),
BREAK
summer,shiny,night in the city
```
[Negative prompt]<br>
```
EasyNegativeV2,[:(negative_hand-neg:1.2):15],(worst quality:1.5),(low quality:1.5),(normal quality:1.5),
(monochrome),(grayscale),(watermark),(white letters),signature,username,text,error,(manicure),(nsfw)
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/F0636/sample1.jpg" width="512">
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/F0636/sample2.jpg" width="512">
----
# マージ使用モデル (Base model)
dreamshaper_6NoVae (Lykon) - https://civitai.com/models/4384<br>
Evt_V4_e10_ema (haor) - https://huggingface.co/haor/Evt_V4-preview<br>
trinart_characters_it4_v1 (Sta, AI Novelist Dev, Bit192, Inc.) - https://huggingface.co/naclbit/trinart_characters_19.2m_stable_diffusion_v1<br>
sdhk_V40 (hakoniwa) - https://civitai.com/models/82813<br>
BracingEvoMix_v1(sazyou_roukaku) - https://huggingface.co/sazyou-roukaku/BracingEvoMix<br>
iroiro-lora (2vXpSwA7) - https://huggingface.co/2vXpSwA7/iroiro-lora<br>
</div>
</details>
<details>
<summary>VoidnoiseCore_R0829</summary>
<div>
# 本モデルの概要 (Overview of this model)
本モデルは<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>ライセンスの訓練モデル、SD1.5追加学習モデル、またそれらのみを使用したマージモデルで製作したモデルになります。<br>
This model was produced using only CreativeML OpenRAIL-M licensed training models, SD1.5 fine-tuned models, and merged models built solely from them.<br>
フォト調を基軸に、人物は写実的に描かれるよう調整し、公開予定だったvoidnoiseMix系列を意識して製作されています。<br>
The model is built on a photographic style, tuned so that figures are rendered realistically, and was created with the voidnoiseMix series (which had been planned for public release) in mind.<br>
クオリティ系のLoRAやLyCORISを使用せずとも、ある程度精細な表現ができるよう調整しています。<br>
Adjustments are made to achieve a certain degree of detailed expression without the use of quality systems LoRA or LyCORIS.<br>
# 推奨設定 (Recommended settings)
- Sampler: DPM++ SDE Karras
- Step: 32 - 38
- CFG scale: 11 - 13
- Denoising strength: 0.5 - 0.55
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 13 - 18
- Hires upscaler: Latent
- VAE: <a href="https://huggingface.co/konapieces/VoidnoiseCore/blob/main/models/R0829/vae/voidnoise.vae.tuned.safetensors" target="_blank">voidnoise.vae.tuned</a>
# 出力サンプル (Sample)
[Prompt]<br>
```
(RAW photo, best quality),(realistic, photo-realistic:1.3),masterpiece,an extremely delicate and beautiful,extremely detailed,8k wallpaper,Amazing,finely detail,ultra-detailed,highres,extremely detailed,
BREAK
1girl,smile,beautiful detailed girl,japanese girl,(black eyes:1.5),extremely detailed eyes and face,looking at viewer,black short hair,(asymmetrical bangs:1.5),petite,longeyelashes,dropping eyes,
BREAK
shot from behind,windy,night view in the city
```
[Negative prompt]<br>
```
EasyNegativeV2,[:(negative_hand-neg:1.2):15],(worst quality:1.5),(low quality:1.5),(normal quality:1.5),
(monochrome),(grayscale),(watermark),(white letters),signature,username,text,error,(manicure),(nsfw)
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R0829/sample1.jpg" width="512">
[Prompt]<br>
```
(RAW photo, best quality),(realistic, photo-realistic:1.5),an extremely delicate and beautiful,extremely detailed ,8k wallpaper,Amazing,finely detail,ultra-detailed,highres,extremely detailed,
BREAK
(1man:1.5),smile,short hair,black eyes,japanese man,looking at viewer,black short hair,
BREAK
windy,night view in the city,
```
[Negative prompt]<br>
```
EasyNegativeV2,[:(negative_hand-neg:1.2):15],(worst quality:1.5),(low quality:1.5),(normal quality:1.5),
(monochrome),(grayscale),(watermark),(white letters),signature,username,text,error,(manicure),(nsfw)
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R0829/sample2.jpg" width="512">
----
# マージ使用モデル (Base model)
VoidnoiseCore_F0636 (konapieces) - https://huggingface.co/konapieces/VoidnoiseCore<br>
BracingEvoMix_v1(sazyou_roukaku) - https://huggingface.co/sazyou-roukaku/BracingEvoMix<br>
OpenBra, Bra6-2(beta) (Bankai) - https://huggingface.co/BanKaiPls/AsianModel<br>
dreamshaper_8 (Lykon) - https://civitai.com/models/4384?modelVersionId=128713<br>
epicrealism_pureEvolutionV3 (epinikion) - https://civitai.com/models/25694?modelVersionId=105035<br>
# VAEサンプル (VAE Sample)
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/vae/vaeplot.png">
# VAE使用モデル (VAE base model)
vae-ft-mse-840000-ema-pruned (StabilityAI) - https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
lastpiece.vae.brightness (konapieces) - https://huggingface.co/konapieces/LastpieceCore/<br>
lastpiece.vae.contrast (konapieces) - https://huggingface.co/konapieces/LastpieceCore/<br>
</div>
</details>
<details>
<summary>VoidnoiseCore_R1311</summary>
<div>

# 本モデルの概要 (Overview of this model)
本モデルは<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>ライセンスの訓練モデル、SD1.5追加学習モデル、またそれらのみを使用したマージモデルで製作したモデルになります。<br>
This model was produced using only CreativeML OpenRAIL-M licensed training models, SD1.5 fine-tuned models, and merged models built solely from them.<br>
VoidnoiseCore R1311は、LastpieceCore S1018をベースにフォト調に調整し、目力が強い成人女性に特化したモデルになります。<br>
VoidnoiseCore R1311 is based on LastpieceCore S1018, adjusted to a photo style and specially designed for adult women with beautiful eyes.<br>
ファッション雑誌風の女性のような人物が精細に描かれ、かつ陰影を強く発するよう調整することで、より現実感を引き立たせる調整を行っております。<br>
Adjustments have been made to enhance the sense of reality by adjusting the figures so that they are depicted in detail and emit strong shadows, like a woman in a fashion magazine style.<br>
# 推奨設定 (Recommended settings)
- Sampler: DPM++ SDE Karras
- Step: 30 - 35
- CFG scale: 11.0 - 11.5
- Denoising strength: 0.6 - 0.65
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 15 - 18
- Hires upscaler: Latent (nearest)
- ENSD (Eta noise seed delta): 31337
- VAE: <a href="https://huggingface.co/konapieces/VoidnoiseCore/blob/main/models/R0829/vae/voidnoise.vae.tuned.safetensors" target="_blank">voidnoise.vae.tuned</a>
- Embeddings: <a href="https://civitai.com/models/56519/negativehand-negative-embedding" target="_blank">negative_hand</a>
# 出力サンプル (Sample)
[Prompt]<br>
```
(RAW photo, best quality),(realistic, photo-realistic:1.5),an extremely delicate and beautiful,extremely detailed,
8k,amazing,ultra-detailed,highres,ray tracing,ultra defined shadow,
BREAK
1 beautiful woman,smile,brown straight hair,swept bangs,brown eyes,kawaii,cute,looking at viewer,(thigh:1.5),
(drooping eyes:1.5),thin eyebrow,lipgloss,(large breasts:1.5),extremely detailed eyes and face,petite,longeyelashes,
BREAK
(ultra low-rise long jeans:1.3), (blouse:1.4),back shot
```
[Negative prompt]<br>
```
negative_hand-neg,(worst quality:1.5),(low quality:1.5),(normal quality:1.5),(monochrome),(grayscale),
(watermark),(white letters),signature,username,text,error,(manicure),(nsfw)
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R1311/sample1.png" width="512">
[Prompt]<br>
```
(RAW photo, best quality),(realistic, photo-realistic:1.5),an extremely delicate and beautiful,extremely detailed,
8k,amazing,ultra-detailed,highres,
BREAK
1 beautiful woman,smile,brown straight hair,swept bangs,brown eyes,kawaii,cute,looking at viewer,(thigh:1.5),
(drooping eyes:1.5),thin eyebrow,lipgloss,(large breasts:1.5),(off-shoulder blouse and skinny jeans:1.4),
extremely detailed eyes and face,petite,longeyelashes,
BREAK
(checkerboard floor:1.2), bookcase, indoor, chandelier, window shade, armchair,cloudy,suck thumb
```
[Negative prompt]<br>
```
negative_hand-neg,(worst quality:1.5),(low quality:1.5),(normal quality:1.5),(monochrome),(grayscale),
(watermark),(white letters),signature,username,text,error,(manicure),(nsfw)
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R1311/sample2.png" width="512">
----
# マージ使用モデル (Base model)
LastpieceCore_S1018 (konapieces) - https://huggingface.co/konapieces/LastpieceCore<br>
VoidnoiseCore_R0829 (konapieces) - https://huggingface.co/konapieces/VoidnoiseCore<br>
BracingEvoMix_v1(sazyou_roukaku) - https://huggingface.co/sazyou-roukaku/BracingEvoMix<br>
flat1 (2vXpSwA7) - https://civitai.com/models/81273/flat1<br>
BetterCuteAsian (antonio_riolo2610) - https://civitai.com/models/80460?modelVersionId=85359<br>
使用した全てのモデルのサブライセンス関係は、<br>
1. Share merges using this model<br>
(このモデルを使用したマージモデルを共有・配布する)<br>
2. Have different permissions when sharing merges<br>
(このモデルをマージしたモデルに異なる権限を設定する)<br>
が許可されているモデルを使用しております。<br>
ベースモデル自体には、一部サブライセンス制限付き※1のモデルがありますので、使用及びその派生モデル公開については、サブライセンスに沿って公開しております。<br>
上記項2により、マージ後のサブライセンス継承は除外されておりますので、フルライセンスのCreativeML OpenRAIL-Mを適用しております。<br>
※1:Use the model without crediting the creator(著作者表示を入れずにモデルを使用する)が許可されていないモデルがある為、使用及び著作者を明示しています。<br>
Sublicensing relationships for all models used,<br>
1. Share merges using this model<br>
2. Have different permissions when sharing merges<br>
are allowed to use the model.<br>
The base model itself has some models with sublicense restrictions*1, so the use and publication of the derived models are released in compliance with the sublicense.<br>
Since the above section 2 excludes the inheritance of sublicenses after merging, the full license CreativeML OpenRAIL-M is applied.<br>
</div>
</details>
<details>
<summary>VoidnoiseCore_F1725</summary>
<div>

# ▼ 本モデルの概要 (Overview of this model)
本モデルは<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>ライセンスの訓練モデル、SD1.5追加学習モデルのみを使用したマージモデルになります。<br>
VoidnoiseCore F1725は、前作のVoidnoiseCore R1311をベースに作成されています。<br>
LoRAやLyCORISを使用しなくても、目力が強い人物を描写することに特化しており、フォトグラフィ調の構図を得意としています。<br>
今回より、アジア女性に限らず、欧米女性などの描写も美しく描写することができる、前作の完全上位互換を目指して作成しています。<br>
<br>
This model is a merged model created using only <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a> licensed training models and SD1.5 fine-tuned models.<br>
VoidnoiseCore F1725 is based on the previous VoidnoiseCore R1311.<br>
It specializes in depicting people with strong eyes without using LoRA or LyCORIS, and excels in photographic composition.<br>
Starting with this release, it is intended as a strict upgrade over its predecessor, able to beautifully depict not only Asian women but also Western women and other subjects.<br>
# ▼ 推奨設定 (Recommended settings)
- Sampler: Restart
- Step: 35 - 40
- CFG scale: 11.0 - 12.0
- Denoising strength: 0.55 - 0.65
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 15 - 20
- Hires upscaler: R-ESRGAN 4x+
- ENSD (Eta noise seed delta): 31337
- VAE: <a href="https://huggingface.co/konapieces/VoidnoiseCore/blob/main/models/R0829/vae/voidnoise.vae.tuned.safetensors" target="_blank">voidnoise.vae.tuned</a>
- Embeddings: <a href="https://civitai.com/models/56519/negativehand-negative-embedding" target="_blank">negative_hand</a>
- Extensions : FreeU, Tiled VAE, Tiled Diffusion
# ▼ 出力サンプル (Sample)
[Prompt]<br>
```
4k wallpaper,highres,kawaii,cute,1 girl, solo,updo, bangs,from adove,close mouth, light smile,
Cargo pants, cropped t-shirt, and combat boots,large breasts,longeyelashes,(eyeshadow),
simple background
```
[Negative prompt]<br>
```
EasyNegativeV2,negative_hand-neg,(worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/F1725/sample1.png" width="512">
[Prompt]<br>
```
4k wallpaper,highres,kawaii,cute,1 beautiful woman, solo,wavy, bangs,close view,from below,
(happy smile:1),(Wrap maxi dress),earing,large breasts,longeyelashes,(eyeshadow),
BREAK
((bokeh,night):1.37)
```
[Negative prompt]<br>
```
(EasyNegative:1.2), [:(bad-hands-5:0.8):20], ((paintings,sketches)), (worst quality:2), lowres, ((monochrome)), ((grayscale)) ,cartoon, red-eye, Animal Ear, (nipple:1.15), hair ornaments,freckles
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/F1725/sample2.png" width="512">
</div>
</details>
<details>
<summary>VoidnoiseCore_R1733</summary>
<div>

# ▼ 本モデルの概要 (Overview of this model)
本モデルは<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>ライセンスの訓練モデル、SD1.5追加学習モデルのみを使用したマージモデルになります。<br>
VoidnoiseCore R1733は、前作のVoidnoiseCore R1311をベースに作成されています。<br>
LoRAやLyCORISを使用しなくても、目力が強い人物を描写することに特化しており、フォトグラフィ調の構図を得意としています。<br>
今回より、アジア女性に限らず、欧米女性などの描写も美しく描写することができる、前作の完全上位互換を目指して作成しています。<br>
<br>
This model is a merged model created using only <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a> licensed training models and SD1.5 fine-tuned models.<br>
VoidnoiseCore R1733 is based on the previous VoidnoiseCore R1311.<br>
It specializes in depicting people with strong eyes without using LoRA or LyCORIS, and excels in photographic composition.<br>
Starting with this release, it is intended as a strict upgrade over its predecessor, able to beautifully depict not only Asian women but also Western women and other subjects.<br>
# ▼ 推奨設定 (Recommended settings)
- Sampler: Restart
- Step: 35 - 40
- CFG scale: 11.0 - 12.0
- Denoising strength: 0.55 - 0.65
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 15 - 20
- Hires upscaler: R-ESRGAN 4x+
- ENSD (Eta noise seed delta): 31337
- VAE: <a href="https://huggingface.co/konapieces/VoidnoiseCore/blob/main/models/R0829/vae/voidnoise.vae.tuned.safetensors" target="_blank">voidnoise.vae.tuned</a>
- Embeddings: <a href="https://civitai.com/models/56519/negativehand-negative-embedding" target="_blank">negative_hand</a>
- Extensions : FreeU, Tiled VAE, Tiled Diffusion
# ▼ 出力サンプル (Sample)
[Prompt]<br>
```
(realistic, photo-realistic:1.37), (masterpiece), (best quality:1.4), (ultra high res:1.2),(RAW photo:1.2), (sharp focus:1.3), (face focus:1.2), vibrant details, hyperrealistic, elegant,
kawaii,cute,(1 woman, solo),inverted bob, bangsamethyst hair,azure eyes,drooping eyes,longeyelashes,eyeshadow,eyeliner,(smile),((hipster, loose turtleneck sweater, overcoat):1.37),large breasts,
BREAK
park,fallin snow, simple background, ((night,bokeh):1.37)
```
[Negative prompt]<br>
```
EasyNegativeV2,negative_hand-neg,(worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R1733/sample1.png" width="512">
[Prompt]<br>
```
(realistic, photo-realistic:1.37), (masterpiece), (best quality:1.4), (ultra high res:1.2),(RAW photo:1.2), (sharp focus:1.3), (face focus:1.2), vibrant details, hyperrealistic, elegant,
kawaii,cute,(1 woman, solo),french twist, bangsamethyst hair,azure eyes,drooping eyes,longeyelashes,eyeshadow,eyeliner,earrings,(smile),
((A polka dot swing dress with a sweetheart neckline):1.37),
BREAK
simple background, ((night,bokeh):1.37),
```
[Negative prompt]<br>
```
EasyNegativeV2,negative_hand-neg,(worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw
```
<img src="https://huggingface.co/konapieces/VoidnoiseCore/resolve/main/images/R1733/sample2.png" width="512">
</div>
</details>
----
# 免責事項 (Disclaimer)
- 本モデルを使用して作成された画像に関しては、個々の利用者に委ねておりますので、生成された画像に関する如何なる問題や係争について、モデル製作者は一切の責任を負いません。
- 本モデルはアダルトコンテンツを目的とした用途を想定しておりません。成人向けコンテンツを生成し、発生した問題についてはモデル製作者は一切の責任を負いません。
- ライセンスに関して問題が発生した場合は、本モデルを予告なく削除させて頂く可能性があります。ご了承ください。
- 犯罪への利用や医療用などの専門的な用途への使用は禁止されております。ライセンス不履行による過失については、モデル製作者は一切の責任を負いません。
- CreativeML OpenRAIL ライセンスの特性上、モデル及び派生モデルにおける販売を許可しておりますが、現状オープンアクセスライセンスである為、モデルの販売は推奨致しません。著作者に無断でモデル販売を行った際に生じたいかなる問題もモデル製作者は一切責任を負いません。
- The model creator assumes no liability for any problems or disputes related to the images created using this model.
- This model is not intended for use with adult content. The model creator assumes no liability for any problems that may occur as a result of generating adult-oriented content.
- In the event of any licensing issues, this model may be removed without notice. We appreciate your understanding.
- Use for criminal offenses or for professional purposes such as medical use is prohibited. The model maker is not liable for any negligence due to non-fulfillment of the license.
- The CreativeML OpenRAIL license permits the sale of models and derivatives, but does not recommend the sale of models because it is currently an open access license. The creator of the model will not be held responsible for any problems that may arise from the sale of the model without the author's permission.
---
# ライセンス (License)
このモデルはオープンアクセスであり、すべての人が利用できます。CreativeML OpenRAIL-M ライセンスにより、権利と使用方法がさらに規定されています。<br>
CreativeML OpenRAIL ライセンスでは、次のことが規定されています。<br>
1. モデルを使用して、違法または有害な出力またはコンテンツを意図的に作成または共有することはできません。<br>
2. 作成者は、あなたが生成した出力に対していかなる権利も主張しません。あなたはそれらを自由に使用でき、ライセンスに設定された規定に違反してはならない使用について説明責任を負います。<br>
3. 重みを再配布し、モデルを商用および/またはサービスとして使用することができます。<br>
その場合、ライセンスに記載されているのと同じ使用制限を含め、<br>
CreativeML OpenRAIL-M のコピーをすべてのユーザーと共有する必要があることに注意してください。 (ライセンスを完全にかつ慎重にお読みください。) <br>
[こちら](https://huggingface.co/spaces/CompVis/stable-diffusion-license)からライセンス全文をお読みください。<br>
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:<br>
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content<br>
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license<br>
3. You may re-distribute the weights and use the model commercially and/or as a service. <br>
If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) <br>
Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)<br>
# VAEライセンス(VAE License)
また、同梱しているVAEは、<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original" target="_blank">vae-ft-mse-840000-ema-pruned</a>をベースに作成されております。<br>
その為、<a href="https://huggingface.co/hakurei/waifu-diffusion-v1-4" target="_blank">kl-f8-anime2</a>の<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>以外に、継承元である
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original" target="_blank">vae-ft-mse-840000-ema-pruned</a>のMIT Licenseも適用となる為、適用ライセンスが異なる点を留意してください。<br>
それに伴い、とーふのかけらが追加著作者として追記しています。<br>
適用ライセンスは以下になります。<br>
The included VAE is based on vae-ft-mse-840000-ema-pruned.<br>
Please note that in addition to CreativeML OpenRAIL-M, the inherited MIT License is also applicable.<br>
With this, konapieces has been added as an additional author.<br>
The applicable licenses are as follows: <br>
Copyright 2022 StabilityAI, Additional Copyright 2023 konapieces<br>
Release under the MIT and CreativeML OpenRAIL-M licenses. <br>
<a href="https://huggingface.co/konapieces/VoidnoiseCore/blob/main/LICENSE.md" target="_blank">MIT License</a><br>
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank">CreativeML OpenRAIL-M</a>
---
# 製作者 (The creator of this model)
とーふのかけら(konapieces)<br>
twitter: <a href="https://twitter.com/konapieces" target="_blank"> @konapieces</a><br>
Website: <a href="https://lit.link/konapieces" target="_blank">https://lit.link/konapieces</a>
--- |
SparseLLM/swiglu-90B | SparseLLM | 2024-01-13T13:04:02Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-01-13T13:02:19Z | Found. Redirecting to https://cdn-lfs-us-1.hf.co/repos/37/cd/37cd53a70fd408c6512496676cc183c5bdebb58bbcb04dbd0e2065c3363d599a/3e0e15fa0c5cc81675bd69af8eb469d128a725c1a7bfc71f03b7877b7b650567?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27README.md%3B+filename%3D%22README.md%22%3B&response-content-type=text%2Fmarkdown&Expires=1739041899&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczOTA0MTg5OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzM3L2NkLzM3Y2Q1M2E3MGZkNDA4YzY1MTI0OTY2NzZjYzE4M2M1YmRlYmI1OGJiY2IwNGRiZDBlMjA2NWMzMzYzZDU5OWEvM2UwZTE1ZmEwYzVjYzgxNjc1YmQ2OWFmOGViNDY5ZDEyOGE3MjVjMWE3YmZjNzFmMDNiNzg3N2I3YjY1MDU2Nz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=Y73EgNwPZsUPLWXLqmBHbGTWmSEfmGKIZHfzOCrVuLyroaN8ALV4srZgyitsHlKQWFXo36Jl1O31%7Ezh6N4DOaJTH7GaxObSjSmD1AdJRuiRPohFwPaIbBEOYpWyVHjedj4NBZQA6VIsrEzM6lhmHBiYs0Wo64cLA7df00uWHZjKrhL5OfqY2aM71V9kwqukUac44kHc97S7YFjkCjtq45E6kKL-xzHZAhtopiR85cA9c9BCEBhj-fN9d7mXDzgauOeEn9aiUwmLyWyP%7EfM99C9K598Bm0P2Fo8BzgRewI9VG4r%7ElX57WVPD8AdBLQNgtGiymklFQvmHbax63ZIX-Iw__&Key-Pair-Id=K24J24Z295AEI9 |
kzipa/distilhubert-finetuned-gtzan | kzipa | 2024-01-13T13:03:49Z | 149 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:kzipa/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-01-13T09:37:45Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- kzipa/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: kzipa/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6442
- Accuracy: 0.85
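A minimal inference sketch (not part of the generated card; it assumes `transformers` with audio support and a local music clip, whose path below is a placeholder):
```python
# Unofficial example: classify the genre of a local music clip.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="kzipa/distilhubert-finetuned-gtzan",
)
print(classifier("song.wav"))  # "song.wav" is a placeholder path
```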
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9367 | 1.0 | 113 | 1.8360 | 0.51 |
| 1.1947 | 2.0 | 226 | 1.2392 | 0.66 |
| 0.902 | 3.0 | 339 | 1.0664 | 0.69 |
| 0.7442 | 4.0 | 452 | 0.8041 | 0.8 |
| 0.5996 | 5.0 | 565 | 0.7320 | 0.8 |
| 0.372 | 6.0 | 678 | 0.7032 | 0.79 |
| 0.3338 | 7.0 | 791 | 0.7086 | 0.83 |
| 0.2011 | 8.0 | 904 | 0.6247 | 0.83 |
| 0.1431 | 9.0 | 1017 | 0.6092 | 0.83 |
| 0.1026 | 10.0 | 1130 | 0.6442 | 0.85 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
traxck/mookysw | traxck | 2024-01-13T13:03:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-01-13T12:53:25Z | Just a LoRA for Rosé (BLACKPINK). |
yuanzhoulvpi/intermlm-7b-lml_001 | yuanzhoulvpi | 2024-01-13T12:55:23Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"internlm",
"feature-extraction",
"text-generation",
"custom_code",
"zh",
"dataset:yuanzhoulvpi/rename_robot",
"region:us"
] | text-generation | 2024-01-13T00:21:07Z | ---
datasets:
- yuanzhoulvpi/rename_robot
language:
- zh
pipeline_tag: text-generation
---
1. Use LoRA to fine-tune the InternLM model.
2. During training, how do you make the model aware of its own identity and have it refuse to answer related questions? A solution is given here.
## Model results
Here is what a model trained with this method can do:


As you can see:
1. The model has a very clear sense of its own identity;
2. The model knows how to refuse to answer;
3. The model still answers other questions reasonably well.
## Training script
GitHub training code: [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/internlm-sft](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/internlm-sft) |
kar-saaragh/poca-SoccerTwos | kar-saaragh | 2024-01-13T12:51:19Z | 18 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-01-13T12:50:50Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kar-saaragh/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Elvira0/Art | Elvira0 | 2024-01-13T12:49:17Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-01-11T15:17:05Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
agustoslu/data_experiment | agustoslu | 2024-01-13T12:45:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/mistral-7b-instruct-v0.1-sharded",
"base_model:adapter:ybelkada/mistral-7b-instruct-v0.1-sharded",
"region:us"
] | null | 2024-01-13T12:41:35Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/mistral-7b-instruct-v0.1-sharded
model-index:
- name: data_experiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data_experiment
This model is a fine-tuned version of [ybelkada/mistral-7b-instruct-v0.1-sharded](https://huggingface.co/ybelkada/mistral-7b-instruct-v0.1-sharded) on the None dataset.
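Since this repository contains a PEFT adapter, a plausible way to use it is to load it on top of the base model. This is an illustrative sketch, not from the card; the generation settings are assumptions.
```python
# Unofficial example: attach the LoRA adapter to its base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ybelkada/mistral-7b-instruct-v0.1-sharded"
adapter_id = "agustoslu/data_experiment"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```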
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0 |
SparseLLM/swiglu-85B | SparseLLM | 2024-01-13T12:45:14Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-01-13T11:55:04Z | Found. Redirecting to https://cdn-lfs-us-1.hf.co/repos/b1/b3/b1b3cde366b017298c6839ccaebfcbe272aa35fd41175da1480502532b9e4dc0/3e0e15fa0c5cc81675bd69af8eb469d128a725c1a7bfc71f03b7877b7b650567?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27README.md%3B+filename%3D%22README.md%22%3B&response-content-type=text%2Fmarkdown&Expires=1739041855&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczOTA0MTg1NX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2IxL2IzL2IxYjNjZGUzNjZiMDE3Mjk4YzY4MzljY2FlYmZjYmUyNzJhYTM1ZmQ0MTE3NWRhMTQ4MDUwMjUzMmI5ZTRkYzAvM2UwZTE1ZmEwYzVjYzgxNjc1YmQ2OWFmOGViNDY5ZDEyOGE3MjVjMWE3YmZjNzFmMDNiNzg3N2I3YjY1MDU2Nz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=S58rPicayEtzkoMP99RZBlRJaFk0MaEgpeGhq6O-B1h4bwc1KC8RQ9IuNoLq0PUVCKv-MypgHHGZbLXRsok3clyL4euIIonWypkZFS7Xb%7E-eBDP3bhMZXelGEPQQAlOREeBCfTQfDUtaaI%7EMlFbvhTDnOEtga1f7eN1iTzJI7uKG2lKJpFp8FnGOtnXjoyJeTlrkwsg9gSkNPKTh9q-hCPaMyQhCDTgkf62u42p26UnTmRaZ%7EccAVmkjnBxUZtSRQAd8UXb3gFPMle5MWkUb%7E1dKgm9jdxV-xlxk9h%7Em0CD4C9wmKKou7wTtHYnOWKk9H5c18DMJ-4zp9rtPxVf0lg__&Key-Pair-Id=K24J24Z295AEI9 |
MaziyarPanahi/dolphin-2.1-mistral-7b-GPTQ | MaziyarPanahi | 2024-01-13T12:37:31Z | 77 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:cognitivecomputations/dolphin-2.1-mistral-7b",
"base_model:finetune:cognitivecomputations/dolphin-2.1-mistral-7b"
] | text-generation | 2024-01-13T12:35:36Z | ---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- en
- dataset:ehartford/dolphin
- dataset:jondurbin/airoboros-2.2.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: dolphin-2.1-mistral-7b-GPTQ
base_model: cognitivecomputations/dolphin-2.1-mistral-7b
inference: false
model_creator: cognitivecomputations
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/dolphin-2.1-mistral-7b-GPTQ](https://huggingface.co/MaziyarPanahi/dolphin-2.1-mistral-7b-GPTQ) is a quantized (GPTQ) version of [cognitivecomputations/dolphin-2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.1-mistral-7b)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/dolphin-2.1-mistral-7b-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
``` |
NSTiwari/corgy_home_LoRA | NSTiwari | 2024-01-13T12:31:10Z | 9 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-01-13T12:25:02Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK home interior
license: openrail++
---
# SDXL LoRA DreamBooth - NSTiwari/corgy_home_LoRA
<Gallery />
## Model description
These are NSTiwari/corgy_home_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK home interior to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/NSTiwari/corgy_home_LoRA/tree/main) them in the Files & versions tab.
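A rough usage sketch (not part of the card; it assumes `diffusers` with SDXL support and a CUDA GPU):
```python
# Unofficial example: load SDXL base, apply the LoRA, and use the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.load_lora_weights("NSTiwari/corgy_home_LoRA")

image = pipe("a photo of TOK home interior, cozy living room").images[0]
image.save("home_interior.png")
```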
|
anakib1/multi-whisper-playground | anakib1 | 2024-01-13T12:26:53Z | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-12T19:15:50Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: multi-whisper-playground
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-whisper-playground
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8508
- Wer: 195.4146
- Acc: 12.6021
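A minimal transcription sketch (not part of the generated card; the audio path below is a placeholder):
```python
# Unofficial example: transcribe a local audio file with the fine-tuned Whisper.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anakib1/multi-whisper-playground",
)
print(asr("sample.wav")["text"])
```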
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 2.2601 | 1.0 | 215 | 2.1670 | 210.5479 | 12.2520 |
| 1.8422 | 2.0 | 430 | 1.9717 | 187.2986 | 12.2520 |
| 1.6574 | 3.0 | 645 | 1.8943 | 162.3279 | 12.2520 |
| 1.4953 | 4.0 | 860 | 1.8562 | 187.3132 | 12.1354 |
| 1.4436 | 5.0 | 1075 | 1.8402 | 173.3153 | 12.2520 |
| 1.322 | 6.0 | 1290 | 1.8362 | 184.9692 | 12.4854 |
| 1.2386 | 7.0 | 1505 | 1.8372 | 183.5409 | 12.6021 |
| 1.1252 | 8.0 | 1720 | 1.8430 | 192.3747 | 12.6021 |
| 1.0449 | 9.0 | 1935 | 1.8459 | 198.8866 | 12.6021 |
| 1.0509 | 10.0 | 2150 | 1.8508 | 195.4146 | 12.6021 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
peulsilva/phrase-bert-setfit-sst5-2shots | peulsilva | 2024-01-13T12:26:01Z | 47 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-01-13T12:25:54Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peulsilva/phrase-bert-setfit-sst5-2shots
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peulsilva/phrase-bert-setfit-sst5-2shots')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-sst5-2shots')
model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-sst5-2shots')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-sst5-2shots)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45 with parameters:
```
{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
quantus17/rise3 | quantus17 | 2024-01-13T12:25:26Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-01-13T11:40:14Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a e6z7a armchair
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
gonmadri/laporta | gonmadri | 2024-01-13T12:20:33Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-01-13T12:18:34Z | ---
license: other
license_name: modelo
license_link: LICENSE
---
|
tranthaihoa/alpaca_Llama2_tuned | tranthaihoa | 2024-01-13T11:58:43Z | 0 | 0 | null | [
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"license:llama2",
"region:us"
] | null | 2024-01-12T09:50:50Z | ---
license: llama2
base_model: unsloth/llama-2-7b
tags:
- trl
- sft
- unsloth
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.14.1
|
TheBloke/Bagel-Hermes-2x34b-GPTQ | TheBloke | 2024-01-13T11:51:31Z | 14 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"base_model:Weyaxi/Bagel-Hermes-2x34B",
"base_model:quantized:Weyaxi/Bagel-Hermes-2x34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-01-13T00:11:02Z | ---
base_model: Weyaxi/Bagel-Hermes-2x34b
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Bagel Hermes 2X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel Hermes 2X34B - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Bagel Hermes 2X34B](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Yağız Çalık's Bagel Hermes 2X34B](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 31.84 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 32.99 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 36.50 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 24.35 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 25.45 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.97 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Bagel-Hermes-2x34b-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Bagel-Hermes-2x34b-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Bagel-Hermes-2x34b-GPTQ`:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Bagel-Hermes-2x34b-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Bagel-Hermes-2x34b-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Bagel-Hermes-2x34b-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Bagel-Hermes-2x34b-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise, if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
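Whichever route you take, you can check which version ended up installed:
```shell
pip3 show auto-gptq
```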
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Bagel-Hermes-2x34b-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Bagel Hermes 2X34B

# Bagel-Hermes-2x34B
This is the model for Bagel-Hermes-2x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MoE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
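Since ChatML is among the supported formats, the prompt can also be built programmatically; a minimal sketch (it assumes the repository's tokenizer ships a ChatML chat template, and the repo id is an assumption based on this card's title):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Bagel-Hermes-2x34B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about llamas."},
]
# Render the conversation with the tokenizer's chat template (ChatML here)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```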
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
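To build the merged model from this config, recent mergekit releases provide a `mergekit-moe` entry point; a typical invocation (the output directory name is an assumption) looks like:
```shell
mergekit-moe config.yaml ./Bagel-Hermes-2x34b
```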
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
Weyaxi/Dolphin2.1-OpenOrca-7B | Weyaxi | 2024-01-13T11:49:19Z | 1,558 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-11T09:23:18Z | ---
license: apache-2.0
model-index:
- name: Dolphin2.1-OpenOrca-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
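For reference, a mergekit-style TIES configuration matching these weights and densities might look like the sketch below (the base model entry and dtype are assumptions, not taken from the original card):
```yaml
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: ehartford/dolphin-2.1-mistral-7b
    parameters:
      weight: 0.5
      density: 0.5
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      weight: 0.3
      density: 0.5
dtype: float16
```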
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ)
##### GGUF
- [TheBloke/Dolphin2.1-OpenOrca-7B-GGUF](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF)
##### AWQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-AWQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin2.1-OpenOrca-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.47|
|AI2 Reasoning Challenge (25-Shot)|63.91|
|HellaSwag (10-Shot) |84.26|
|MMLU (5-Shot) |62.66|
|TruthfulQA (0-shot) |53.84|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |19.94|
|
MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ | MaziyarPanahi | 2024-01-13T11:43:28Z | 76 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1"
] | text-generation | 2024-01-12T21:58:42Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
inference: false
license: apache-2.0
model_creator: mistralai
model_name: Mistral-7B-Instruct-v0.1-GPTQ
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- finetuned
- arxiv:2310.06825
- license:apache-2.0
- autotrain_compatible
- has_space
- text-generation-inference
- region:us
---
# Description
[MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ) is a quantized (GPTQ) version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
## How to use
### Install the necessary packages
```shell
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python Code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,  # enable sampling so that temperature and top_p take effect
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
``` |
younoger/YGBNumbersBert-0.5 | younoger | 2024-01-13T11:24:27Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:younoger/autotrain-data-YGBNumbersBert-0.5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-13T11:24:18Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- younoger/autotrain-data-YGBNumbersBert-0.5
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.0019458403112366796
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
|
kevin009/culturalmixed | kevin009 | 2024-01-13T11:16:17Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T03:08:02Z | ---
license: apache-2.0
language:
- en
---
# Model Card: LoRA Configuration Causal Language Model
## Training Specifications
### LoRA Configuration
- **Configuration**: `LoraConfig`
- **Parameters**:
- `r`: 16
- `lora_alpha`: 16
- `lora_dropout`: 0.05
- `bias`: none
- `task_type`: CAUSAL_LM
- `target_modules`: ['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
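A minimal sketch of this configuration as a `peft` object (values taken from the list above):
```python
from peft import LoraConfig

# LoRA configuration with the values listed above
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)
```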
### Model to Fine-Tune
- **Function**: `AutoModelForCausalLM.from_pretrained`
- **Parameters**:
- `model_name`
- `torch_dtype`: torch.float16
- `load_in_4bit`: True
- **Configurations**:
- `use_cache`: False
### Reference Model
- **Function**: `AutoModelForCausalLM.from_pretrained`
- **Parameters**:
- `model_name`
- `torch_dtype`: torch.float16
- `load_in_4bit`: True
### Training Arguments
- **Function**: `TrainingArguments`
- **Parameters**:
- `per_device_train_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `gradient_checkpointing`: True
- `learning_rate`: 5e-5
- `lr_scheduler_type`: "cosine"
- `max_steps`: 200
- `save_strategy`: "no"
- `logging_steps`: 1
- `output_dir`: new_model
- `optim`: "paged_adamw_32bit"
- `warmup_steps`: 100
- `bf16`: True
- `report_to`: "wandb"
### Create DPO Trainer
- **Function**: `DPOTrainer`
- **Parameters**:
- `model`
- `ref_model`
- `args`: training_args
- `train_dataset`: dataset
- `tokenizer`: tokenizer
- `peft_config`: peft_config
- `beta`: 0.1
- `max_prompt_length`: 1024
- `max_length`: 1536 |
VuongQuoc/1_microsoft_deberta_V1.1 | VuongQuoc | 2024-01-13T11:09:30Z | 79 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-01-13T07:43:45Z | ---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 1_microsoft_deberta_V1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_microsoft_deberta_V1.1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7138
- Map@3: 0.8492
- Accuracy: 0.775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map@3 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 1.6141 | 0.03 | 50 | 1.6087 | 0.6242 | 0.51 |
| 1.336 | 0.05 | 100 | 1.1398 | 0.7550 | 0.645 |
| 0.9441 | 0.08 | 150 | 0.8809 | 0.8150 | 0.7 |
| 0.9279 | 0.11 | 200 | 0.7528 | 0.8383 | 0.73 |
| 0.8639 | 0.13 | 250 | 0.7259 | 0.8525 | 0.76 |
| 0.8255 | 0.16 | 300 | 0.7363 | 0.8592 | 0.785 |
| 0.8411 | 0.19 | 350 | 0.7052 | 0.8483 | 0.76 |
| 0.856 | 0.21 | 400 | 0.7097 | 0.8408 | 0.745 |
| 0.7753 | 0.24 | 450 | 0.6860 | 0.8575 | 0.775 |
| 0.7941 | 0.27 | 500 | 0.7146 | 0.8525 | 0.765 |
| 0.8062 | 0.29 | 550 | 0.7138 | 0.8492 | 0.775 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.13.3
|
justinwangx/vicuna-adv-robust-ul15-sft-full | justinwangx | 2024-01-13T11:05:31Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T06:39:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: vicuna-adv-robust-ul15-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vicuna-adv-robust-ul15-sft-full
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0543 | 0.61 | 15 | 1.0329 |
| 0.9704 | 1.61 | 30 | 1.0141 |
| 0.9103 | 2.61 | 45 | 1.0125 |
| 0.8485 | 3.61 | 60 | 1.0221 |
| 0.785 | 4.61 | 75 | 1.0448 |
| 0.7207 | 5.61 | 90 | 1.0821 |
| 0.6444 | 6.61 | 105 | 1.1344 |
| 0.5673 | 7.61 | 120 | 1.1993 |
| 0.4883 | 8.61 | 135 | 1.2800 |
| 0.4137 | 9.61 | 150 | 1.3778 |
| 0.345 | 10.61 | 165 | 1.4092 |
| 0.3022 | 11.61 | 180 | 1.5371 |
| 0.2649 | 12.61 | 195 | 1.5054 |
| 0.2272 | 13.61 | 210 | 1.5542 |
| 0.1929 | 14.61 | 225 | 1.6869 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
prashrex/airsewa3 | prashrex | 2024-01-13T10:49:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-13T07:52:22Z | ---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sehyun66/Tiny-lama-1.3B-chat-ppo | sehyun66 | 2024-01-13T10:44:32Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"humman feedback",
"HH-RLHF",
"PPO",
"lama-1.3B",
"question-answering",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-01-13T10:25:59Z | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
pipeline_tag: question-answering
tags:
- humman feedback
- HH-RLHF
- PPO
- lama-1.3B
---
# RLHF with PPOTrainer and LoRA


# Hyperparameters
### PPO
- learning_rate: 5e-6
- batch_size: 32
- mini_batch_size: 1
- horizon: 10000
- cliprange: 0.2
- cliprange_value: 0.2
- lam: 0.95
- target_kl: 2
- use_score_scaling: True
- log_with: wandb
### LoRA
- r: 16
- lora_alpha: 32
- lora_dropout: 0.05
- bias: none
- task_type: CAUSAL_LM
|
Drawin/DDPM-Street-Scene-Repaint | Drawin | 2024-01-13T10:42:22Z | 1 | 0 | diffusers | [
"diffusers",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-01-13T10:30:09Z | ---
license: mit
---
Pretrained model for https://github.com/Draw1n/DDPM-Street-Scene-Repaint
image_size = 128x128, batch size = 16, epochs = 250, roughly 8 hours of training.
Put model.bin and the accompanying JSON config into ./output/. |
version-control/tensorflow-1.0-oss-starcoderbase-1b-prefix | version-control | 2024-01-13T10:26:48Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"region:us"
] | null | 2024-01-13T10:26:45Z | ---
library_name: peft
base_model: bigcode/starcoderbase-1b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Samuael/asr-amharic-phoneme-based-39-3 | Samuael | 2024-01-13T10:25:47Z | 64 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Samuael/asr-amharic-phoneme-based-39-3",
"base_model:finetune:Samuael/asr-amharic-phoneme-based-39-3",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-29T12:40:40Z | ---
base_model: Samuael/asr-amharic-phoneme-based-39-3
tags:
- generated_from_trainer
model-index:
- name: asr-amharic-phoneme-based-39-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr-amharic-phoneme-based-39-3
This model is a fine-tuned version of [Samuael/asr-amharic-phoneme-based-39-3](https://huggingface.co/Samuael/asr-amharic-phoneme-based-39-3) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4962
- eval_wer: 0.4152
- eval_phoneme_cer: 0.0897
- eval_cer: 0.1396
- eval_runtime: 22.8954
- eval_samples_per_second: 15.68
- eval_steps_per_second: 1.965
- epoch: 2.94
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
margenai/Llama-2-7b-chat-ecommerce-faq-hf | margenai | 2024-01-13T10:25:10Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"region:us"
] | null | 2024-01-13T10:24:49Z | ---
library_name: peft
base_model: Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
diffusionai/diffusionbrush | diffusionai | 2024-01-13T10:02:43Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-04T12:14:06Z | ---
license: creativeml-openrail-m
--- |
Qwen/Qwen-tokenizer | Qwen | 2024-01-13T09:53:50Z | 0 | 15 | null | [
"license:other",
"region:us"
] | null | 2024-01-13T09:49:21Z | ---
license: other
license_name: tongyi-qianwen-license
license_link: >-
https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
|
mojuss/phi-2-gpt-exam-15 | mojuss | 2024-01-13T09:49:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2024-01-13T09:49:36Z | ---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
itzNoBiTa/mbart-Translation-En-Ur | itzNoBiTa | 2024-01-13T09:43:32Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"ur",
"en",
"dataset:opus100",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-13T15:44:03Z | ---
license: unknown
datasets:
- opus100
language:
- ur
- en
widget:
- text: Translate 'My name is Ahmad and i study in UET' to Urdu.
- text: Translate 'I am Human' to Urdu.
---
This is the basic version, fine-tuned on only 5k examples; you can fine-tune it further to get a much higher score. |
skyfucker/LunarLander-v2 | skyfucker | 2024-01-13T09:36:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-13T09:35:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.25 +/- 16.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent
checkpoint_path = load_from_hub(repo_id="skyfucker/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint_path)
```
|
ntc-ai/SDXL-LoRA-slider.macro-close-up-shot | ntc-ai | 2024-01-13T09:22:05Z | 29 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-01-13T09:22:02Z |
---
language:
- en
thumbnail: "images/evaluate/macro close-up shot.../macro close-up shot_17_3.0.png"
widget:
- text: macro close-up shot
output:
url: images/macro close-up shot_17_3.0.png
- text: macro close-up shot
output:
url: images/macro close-up shot_19_3.0.png
- text: macro close-up shot
output:
url: images/macro close-up shot_20_3.0.png
- text: macro close-up shot
output:
url: images/macro close-up shot_21_3.0.png
- text: macro close-up shot
output:
url: images/macro close-up shot_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "macro close-up shot"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - macro close-up shot (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/macro close-up shot_17_-3.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_17_0.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_17_3.0.png" width=256 height=256 /> |
| <img src="images/macro close-up shot_19_-3.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_19_0.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_19_3.0.png" width=256 height=256 /> |
| <img src="images/macro close-up shot_20_-3.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_20_0.0.png" width=256 height=256 /> | <img src="images/macro close-up shot_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
macro close-up shot
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.macro-close-up-shot', weight_name='macro close-up shot.safetensors', adapter_name="macro close-up shot")
# Activate the LoRA
pipe.set_adapters(["macro close-up shot"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, macro close-up shot"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1080+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
YunzheLv/x-llama-ar-7b | YunzheLv | 2024-01-13T09:15:53Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ar",
"en",
"dataset:tatsu-lab/alpaca",
"dataset:news_commentary",
"arxiv:2308.04948",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-09T11:18:59Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- news_commentary
language:
- ar
- en
metrics:
- bleu
- bleurt
- comet
pipeline_tag: text-generation
---
# Extrapolating Large Language Models to Non-English by Aligning Languages
This repository contains the code implementation for the project that aims to empower pre-trained Large Language Models (LLMs) on non-English languages by building semantic alignment across languages. The project explores cross-lingual instruction-tuning and multilingual instruction-tuning techniques. The code implementation is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## Requirements and Installation
To install this repository, follow these steps:
```
git clone [email protected]:NJUNLP/x-LLM.git
cd x-LLM
pip install --editable ./
```
For detailed information about the conda environment, refer to the environment.yml file.
## Usage
### Download Pre-trained LLM
Start by downloading the pre-trained LLM into the ./model directory.
### Download Dataset
You can download all the datasets used in this project from this [link](https://drive.google.com/file/d/1bkejieKDJFDJ45UmQYiY4eeqpGBwj-r-/view?usp=drive_link). Once downloaded, place the datasets in the ./data directory. The datasets include:
* Training dataset
* Alpaca
* Wikimatrix
* Newscommentary
* Evaluation dataset
* XQUAD
* MLQA
* Flores-101
* MI-Eval
### Load Raw Data Along with Instruction
You can load raw data along with instructions using the provided scripts (`./data/<dataset>/<dataset>.py`). If you want to use a new dataset, you need to implement the corresponding script. The loaded data will have the following structure:
``` python
datasets.Features(
{
"id": datasets.Value("string"),
"instruction": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string")
}
)
```
## Instruction-tune Pre-trained LLM
To instruction-tune the pre-trained LLM, run the train.sh script. For example, you can instruction-tune LLaMA-7B to x-LLaMA-7B (Chinese) with the following command:
``` bash
bash script/train.sh llama-7b-hf alpaca_en+alpaca_zh+translation_ncwm_en-zh
```
In this command, the first argument denotes the pre-trained LLM to use, and the second argument represents the training data to use. You can use + to concatenate multiple datasets, and the training data will be shuffled by the Huggingface Trainer.
Once the training is complete, the finetuned LLM will be saved in ./model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune. You can use aliases to define shorter names, and more details can be found in ./data/alias/alias.json.
## Test Finetuned LLM
To test the finetuned LLM, run the inference.sh script. For example, you can test the tuned LLM on the Flores dataset with the following command:
``` bash
bash script/inference.sh llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune translation_flores_en-zh
```
The output results will be saved in model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl. The prediction field represents the generated content of the LLM.
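For reference, a minimal sketch for inspecting that JSONL output (it assumes one JSON object per line with a `prediction` field, as described above):
```python
import json

# Path from the example above; adjust it to your own run
path = "model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl"
with open(path) as f:
    for line in f:
        record = json.loads(line)
        print(record["prediction"])
```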
## Interact with LLM Through Web UI
To interact with the LLM through a web UI, run app.py with the following command:
``` bash
bash app.py model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune
```
## Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{zhu2023extrapolating,
title={Extrapolating Large Language Models to Non-English by Aligning Languages},
author={Wenhao Zhu and Yunzhe Lv and Qingxiu Dong and Fei Yuan and Jingjing Xu and Shujian Huang and Lingpeng Kong and Jiajun Chen and Lei Li},
year={2023},
eprint={2308.04948},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
YunzheLv/x-llama-tr-7b | YunzheLv | 2024-01-13T09:15:48Z | 54 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"tr",
"en",
"dataset:tatsu-lab/alpaca",
"dataset:news_commentary",
"arxiv:2308.04948",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-09T11:19:42Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- news_commentary
language:
- tr
- en
metrics:
- bleu
- bleurt
- comet
pipeline_tag: text-generation
---
# Extrapolating Large Language Models to Non-English by Aligning Languages
This repository contains the code implementation for the project that aims to empower pre-trained Large Language Models (LLMs) on non-English languages by building semantic alignment across languages. The project explores cross-lingual instruction-tuning and multilingual instruction-tuning techniques. The code implementation is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## Requirements and Installation
To install this repository, follow these steps:
```
git clone [email protected]:NJUNLP/x-LLM.git
cd x-LLM
pip install --editable ./
```
For detailed information about the conda environment, refer to the environment.yml file.
## Usage
### Download Pre-trained LLM
Start by downloading the pre-trained LLM into the ./model directory.
### Download Dataset
You can download all the datasets used in this project from this [link](https://drive.google.com/file/d/1bkejieKDJFDJ45UmQYiY4eeqpGBwj-r-/view?usp=drive_link). Once downloaded, place the datasets in the ./data directory. The datasets include:
* Training dataset
* Alpaca
* Wikimatrix
* Newscommentary
* Evaluation dataset
* XQUAD
* MLQA
* Flores-101
* MI-Eval
### Load Raw Data Along with Instruction
You can load raw data along with instructions using the provided scripts (`./data/<dataset>/<dataset>.py`). If you want to use a new dataset, you need to implement the corresponding script. The loaded data will have the following structure:
``` python
datasets.Features(
{
"id": datasets.Value("string"),
"instruction": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string")
}
)
```
## Instruction-tune Pre-trained LLM
To instruction-tune the pre-trained LLM, run the train.sh script. For example, you can instruction-tune LLaMA-7B to x-LLaMA-7B (Chinese) with the following command:
``` bash
bash script/train.sh llama-7b-hf alpaca_en+alpaca_zh+translation_ncwm_en-zh
```
In this command, the first argument denotes the pre-trained LLM to use, and the second argument represents the training data to use. You can use + to concatenate multiple datasets, and the training data will be shuffled by the Huggingface Trainer.
Once the training is complete, the finetuned LLM will be saved in ./model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune. You can use aliases to define shorter names, and more details can be found in ./data/alias/alias.json.
## Test Finetuned LLM
To test the finetuned LLM, run the inference.sh script. For example, you can test the tuned LLM on the Flores dataset with the following command:
``` bash
bash script/inference.sh llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune translation_flores_en-zh
```
The output results will be saved in model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl. The prediction field represents the generated content of the LLM.
## Interact with LLM Through Web UI
To interact with the LLM through a web UI, run app.py with the following command:
``` bash
python app.py model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune
```
## Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{zhu2023extrapolating,
title={Extrapolating Large Language Models to Non-English by Aligning Languages},
author={Wenhao Zhu and Yunzhe Lv and Qingxiu Dong and Fei Yuan and Jingjing Xu and Shujian Huang and Lingpeng Kong and Jiajun Chen and Lei Li},
year={2023},
eprint={2308.04948},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
marcel/phixtral-4x2_8-gates-poc | marcel | 2024-01-13T09:10:40Z | 9 | 4 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"moe",
"nlp",
"code",
"cognitivecomputations/dolphin-2_6-phi-2",
"lxuechen/phi-2-dpo",
"Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"mrm8488/phi-2-coder",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-12T19:10:49Z | ---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- moe
- nlp
- code
- cognitivecomputations/dolphin-2_6-phi-2
- lxuechen/phi-2-dpo
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
---

# phixtral-4x2_8-gates-poc
phixtral-4x2_8-gates-poc is [phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)
with gates finetuned for better expert selection and to break the symmetry between experts.
As a proof of concept, we used only 400 shorter samples
from [openhermes](https://huggingface.co/datasets/teknium/openhermes).
phixtral-4x2_8 is the first Mixture of Experts (MoE) made with four [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) models, inspired by the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) architecture. It performs better than each individual expert.
## 🏆 Evaluation
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**phixtral-4x2_8**](https://huggingface.co/mlabonne/phixtral-4x2_8)| **33.91**| **70.44**| **48.78**| **37.68**| **47.7**|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.12| 69.85| 47.39| 37.2| 46.89|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)| 30.39| 71.68| 50.75| 34.9| 46.93|
|[phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)| 30.61| 71.13| 48.74| 35.23| 46.43|
|[phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)| TBD| TBD| TBD| TBD| TBD|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.98| 70.8| 44.43| 35.21| 44.61|
Check [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) to compare it with other models.
## 🧩 Configuration
The model has been made with a custom version of the [mergekit](https://github.com/cg123/mergekit) library (mixtral branch) and the following configuration:
```yaml
base_model: cognitivecomputations/dolphin-2_6-phi-2
gate_mode: cheap_embed
experts:
- source_model: cognitivecomputations/dolphin-2_6-phi-2
positive_prompts: [""]
- source_model: lxuechen/phi-2-dpo
positive_prompts: [""]
- source_model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
positive_prompts: [""]
- source_model: mrm8488/phi-2-coder
positive_prompts: [""]
```
## 💻 Usage
Here's a [Colab notebook](https://colab.research.google.com/drive/1k6C_oJfEKUq0mtuWKisvoeMHxTcIxWRa?usp=sharing) to run Phixtral in 4-bit precision on a free T4 GPU.
```python
!pip install -q --upgrade transformers einops accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "phixtral-4x2_8"
instruction = '''
def print_prime(n):
"""
Print all primes between 1 and n
"""
'''
torch.set_default_device("cuda")
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
f"mlabonne/{model_name}",
torch_dtype="auto",
load_in_4bit=True,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
f"mlabonne/{model_name}",
trust_remote_code=True
)
# Tokenize the input string
inputs = tokenizer(
instruction,
return_tensors="pt",
return_attention_mask=False
)
# Generate text using the model
outputs = model.generate(**inputs, max_length=200)
# Decode and print the output
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
Inspired by [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), you can specify the `num_experts_per_tok` and `num_local_experts` in the [`config.json`](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/config.json#L26-L27) file (2 and 4 by default). This configuration is automatically loaded in `configuration.py`.
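As a hedged sketch of how those values could be adjusted before loading (the attribute names come from the card above; whether values other than the defaults work well is untested here):
```python
from transformers import AutoConfig, AutoModelForCausalLM

# Load the custom config and tweak the MoE routing parameters (2 and 4 are the documented defaults)
config = AutoConfig.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)
config.num_experts_per_tok = 2   # experts activated per token
config.num_local_experts = 4     # total number of experts in the model
model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/phixtral-4x2_8", config=config, trust_remote_code=True
)
```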
[vince62s](https://huggingface.co/vince62s) implemented the MoE inference code in the `modeling_phi.py` file. In particular, see the [MoE class](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/modeling_phi.py#L293-L317).
## 🤝 Acknowledgments
A special thanks to [vince62s](https://huggingface.co/vince62s) for the inference code and the dynamic configuration of the number of experts. He was very patient and helped me to debug everything.
Thanks to [Charles Goddard](https://github.com/cg123) for the [mergekit](https://github.com/cg123/mergekit) library and the implementation of the [MoE for clowns](https://goddard.blog/posts/clown-moe/).
Thanks to [ehartford](https://huggingface.co/ehartford), [lxuechen](https://huggingface.co/lxuechen), [Yhyu13](https://huggingface.co/Yhyu13), and [mrm8488](https://huggingface.co/mrm8488) for their fine-tuned phi-2 models.
|
yuntaeyang/SOLAR-10.7B-Instructlora_sftt-v1.0 | yuntaeyang | 2024-01-13T09:06:37Z | 2,224 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T05:33:21Z | ---
license: apache-2.0
language:
- ko
---
# Model Card for yuntaeyang/SOLAR-10.7B-Instructlora_sftt-v1.0
## Developed by: yuntaeyang (Yonsei)
## Base Model: Upstage/SOLAR-10.7B-Instruct-v1.0
## Training Datasets
- kyujinpy/KOR-OpenOrca-Platypus-v3
- No additional data |
AIgroup-CVM-utokyohospital/TinyLlama-1.1B-ja | AIgroup-CVM-utokyohospital | 2024-01-13T08:54:38Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-01-13T08:34:22Z | ---
license: apache-2.0
---
# TinyLlama + Japanese
A continually pretrained version of TinyLlama 1.1B, trained further on a small set of Japanese corpora.
### Base Model
[TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
### Tokenizers
[elyza/ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)
### Training Dataset
Around 9B tokens in total.
- izumi-lab/wikipedia-ja-20230720
- if001/oscar_2023_filtered
### Validation Dataset
- izumi-lab/wikinews-ja-20230728
- izumi-lab/wikinews-en-20230728
- if001/aozorabunko-clean-sin
### Evaluation
We did not perform an evaluation.
### Acknowledgement
We acknowledge those who prepared valuable datasets and [lit-gpt](https://github.com/Lightning-AI/lit-gpt).
|
A2H0H0R1/Qwen-7B-Chat-Int4-Qlora-biology-new-long | A2H0H0R1 | 2024-01-13T08:39:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B-Chat-Int4",
"base_model:adapter:Qwen/Qwen-7B-Chat-Int4",
"region:us"
] | null | 2024-01-13T08:26:36Z | ---
library_name: peft
base_model: Qwen/Qwen-7B-Chat-Int4
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
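Until the card is completed, here is a minimal sketch of attaching this adapter to its base model; the loading details are assumptions based on the metadata above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen-7B-Chat-Int4"
adapter_id = "A2H0H0R1/Qwen-7B-Chat-Int4-Qlora-biology-new-long"

# Qwen Int4 checkpoints require auto-gptq/optimum to be installed
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)

# Attach this LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```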
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
malhajar/meditron-70b-chat | malhajar | 2024-01-13T08:30:52Z | 16 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Medicine",
"en",
"dataset:malhajar/alpaca-gpt4-tr",
"base_model:epfl-llm/meditron-70b",
"base_model:finetune:epfl-llm/meditron-70b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T08:02:53Z | ---
language:
- en
tags:
- Medicine
datasets:
- malhajar/alpaca-gpt4-tr
license: llama2
base_model: epfl-llm/meditron-70b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
meditron-70b-chat is a finetuned version of [`epfl-llm/meditron-70b`](https://huggingface.co/epfl-llm/meditron-70b), trained with SFT on the Alpaca dataset.
This model can answer questions about a range of explicit topics in medicine (see [`epfl-llm/meditron-70b`](https://huggingface.co/epfl-llm/meditron-70b) for more info).
### Model Description
- **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** English
- **Finetuned from model:** [`epfl-llm/meditron-70b`](https://huggingface.co/epfl-llm/meditron-70b)
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/meditron-70b-chat"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "what is tract infection?"
# For generating a response, fill the prompt template with the question
prompt = f'''
### Instruction:
{question}

### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id,
                        top_k=50, do_sample=True, top_p=0.95)
response = tokenizer.decode(output[0])
print(response)
``` |
decruz07/kellemar-DPO-7B-v1.01 | decruz07 | 2024-01-13T08:07:54Z | 15 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-11T07:02:37Z | ---
base_model: teknium/OpenHermes-2.5-Mistral-7B
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
---
# Model Card for decruz07/kellemar-DPO-7B-v1.01
<!-- Provide a quick summary of what the model is/does. -->
This model was created using OpenHermes-2.5 as the base, and finetuned with argilla/distilabel-intel-orca-dpo-pairs.
## Model Details
Finetuned with these specific parameters:
- Steps: 200
- Learning rate: 5e-5
- Beta: 0.1
### Model Description
- **Developed by:** @decruz
- **Funded by [optional]:** my full-time job
- **Finetuned from model [optional]:** teknium/OpenHermes-2.5-Mistral-7B
## Benchmarks
**OpenLLM**
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
| 68.32 | 65.78 | 85.04 | 63.24 | 55.54 | 78.69 | 61.64 |
**Nous**
| AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|
| 43.17 | 73.25 | 55.87 | 42.2 |53.62 |
## Uses
You can use this model for basic inference, or as a starting point for further finetuning.
## How to Get Started with the Model
You can create a Space from this model, or call it directly from Python to run inference; a hedged sketch follows below.
[More Information Needed]
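A minimal, hedged example of such an inference call (the ChatML prompt format is assumed, inherited from the OpenHermes-2.5-Mistral-7B base):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decruz07/kellemar-DPO-7B-v1.01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-style prompt; adjust if the tokenizer ships its own chat template
prompt = "<|im_start|>user\nExplain DPO in one sentence.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```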
## Training Details
The following was used:
```python
from transformers import TrainingArguments
from trl import DPOTrainer

# model, ref_model, dataset, tokenizer, peft_config and new_model are defined earlier in the notebook
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
```
### Training Data
This was trained with https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs
### Training Procedure
Trained with Labonne's Google Colab Notebook on Finetuning Mistral 7B with DPO.
## Model Card Authors [optional]
@decruz
## Model Card Contact
@decruz on X/Twitter |
sur365/ppo-LunarLander-v2 | sur365 | 2024-01-13T07:53:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-13T07:52:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.16 +/- 18.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the trained agent; the checkpoint filename follows the usual SB3 Hub convention and is an assumption:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("sur365/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GermanT5/t5-efficient-gc4-german-base-nl36 | GermanT5 | 2024-01-13T07:52:17Z | 42 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"german",
"deutsch",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-07T20:17:23Z | ---
language: de
license: mit
tags:
- german
- deutsch
---
# Creators
- [Stefan Schweter](https://github.com/stefan-it) ([schweter.ml](https://schweter.ml))
- [Philip May](https://may.la) ([Deutsche Telekom](https://www.telekom.de/))
- [Philipp Schmid](https://www.philschmid.de/) ([Hugging Face](https://huggingface.co/))
# Evaluation
Evaluation was done on a summarization task with:
- train data: [Swisstext](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html)
- test data: [MLSUM](https://huggingface.co/datasets/mlsum)
- GPUs: 4 (V100)
for details see: <https://github.com/GermanT5/german-t5-eval>
# Tips for training on GPUs
This model is too big to fit on a normal 16GB GPU in FP32 mode.
T5 models cannot be trained reliably in FP16 mode (activations tend to overflow).
The alternative, bf16 mixed-precision training, is not supported on many GPUs:
for example, it does not work on V100 GPUs, but it does on A100.
That is why we suggest using [DeepSpeed](https://github.com/microsoft/DeepSpeed) for training.
In particular, we recommend the [ZeRO-3 Example](https://huggingface.co/docs/transformers/main_classes/deepspeed#zero3-example) `auto` configuration.
> ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs.
see [ZeRO-Offload](https://www.deepspeed.ai/features/#zero-offload)
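A minimal sketch of wiring this up with the Hugging Face Trainer (all values below are placeholders, not the settings we used; the DeepSpeed JSON is assumed to be a ZeRO-3 config with `auto` values):
```python
from transformers import TrainingArguments

# Point the Trainer at a ZeRO-3 "auto" config file; path and hyperparameters are placeholders
training_args = TrainingArguments(
    output_dir="t5-gc4-summarization",
    per_device_train_batch_size=4,
    learning_rate=1e-4,
    num_train_epochs=1,
    deepspeed="ds_config_zero3.json",  # ZeRO-3 with CPU offload; keeps FP32 training feasible on V100s
)
```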
## License - The MIT License
Copyright 2022 Stefan Schweter<br>
Copyright 2022 Philip May<br>
Copyright 2022 Philipp Schmid
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|