| modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-13 01:05:21) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 423 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-13 01:03:53) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Nexspear/392148a3-b359-4e57-9660-162c922d3eae | Nexspear | "2025-01-09T01:58:37Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-09T01:50:50Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 392148a3-b359-4e57-9660-162c922d3eae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90551035197c1c44_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90551035197c1c44_train_data.json
type:
field_instruction: input_persona
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Nexspear/392148a3-b359-4e57-9660-162c922d3eae
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/90551035197c1c44_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: leixa-personal
wandb_mode: online
wandb_name: 392148a3-b359-4e57-9660-162c922d3eae
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 392148a3-b359-4e57-9660-162c922d3eae
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 392148a3-b359-4e57-9660-162c922d3eae
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2376
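Because this repository contains a PEFT LoRA adapter rather than full model weights, a minimal loading sketch (an illustration assuming the standard `transformers`/`peft` APIs, not an official snippet from the author) could look like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Math-1.5B"
adapter_id = "Nexspear/392148a3-b359-4e57-9660-162c922d3eae"

# load the base model, then attach the LoRA adapter weights from this repo
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Solve: 12 * 7 =", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```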
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.4195 |
| 0.2614 | 0.0121 | 50 | 0.2717 |
| 0.2128 | 0.0242 | 100 | 0.2430 |
| 0.2071 | 0.0363 | 150 | 0.2383 |
| 0.2551 | 0.0484 | 200 | 0.2376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DBangshu/GPT2_4_0 | DBangshu | "2024-06-11T18:10:34Z" | 134 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-11T18:10:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jecp97/trial-ppo-LunarLander-v2 | jecp97 | "2022-05-08T20:28:36Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-08T16:22:10Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 206.72 +/- 58.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
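Until the author fills this in, a minimal sketch (assuming the checkpoint was pushed with the usual `huggingface_sb3` workflow; the filename below is hypothetical, so check the repository's file list) might look like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# hypothetical filename; replace with the actual .zip stored in the repository
checkpoint = load_from_hub(
    repo_id="jecp97/trial-ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```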
|
kaantureyyen/deberta-blog-authorship-corpus-authorship-attribution | kaantureyyen | "2024-12-18T18:52:15Z" | 6 | 0 | null | [
"safetensors",
"deberta-v2",
"text-classification",
"en",
"dataset:barilan/blog_authorship_corpus",
"arxiv:2410.00751",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"region:us"
] | text-classification | "2024-12-18T15:42:23Z" | ---
datasets:
- barilan/blog_authorship_corpus
language:
- en
pipeline_tag: text-classification
base_model:
- microsoft/deberta-v3-small
---
DeBERTaV3 (small) fine-tuned on the Blog Authorship Corpus for authorship attribution over 10 authors, using the `author10` dataset from:
Meisenbacher, Stephen, and Florian Matthes. "Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting." arXiv preprint arXiv:2410.00751 (2024).
Found in: https://github.com/sjmeis/DPNONDP
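A minimal inference sketch (assuming the standard `transformers` text-classification pipeline; the predicted labels correspond to the 10 anonymized author classes, and the input sentence is purely illustrative) could be:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kaantureyyen/deberta-blog-authorship-corpus-authorship-attribution",
)
print(classifier("I spent the whole weekend refactoring my blog engine again."))
```
The evaluation metrics reported for this model are: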
```json
{
"epoch": 5.0,
"eval_accuracy": 0.639,
"eval_loss": 0.9551867842674255,
"eval_macro_f1": 0.6359876614042939,
"eval_macro_precision": 0.6469646011112227,
"eval_macro_recall": 0.639,
"eval_micro_f1": 0.639,
"eval_runtime": 282.9465,
"eval_samples_per_second": 7.068,
"eval_steps_per_second": 0.884,
"step": 1875
}
``` |
Cicciokr/XLM-Roberta-Base-Latin-Uncased | Cicciokr | "2025-01-14T13:45:49Z" | 127 | 0 | null | [
"safetensors",
"xlm-roberta",
"fill-mask",
"la",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:apache-2.0",
"region:us"
] | fill-mask | "2025-01-14T13:30:33Z" | ---
license: apache-2.0
language:
- la
metrics:
- accuracy
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: fill-mask
---
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
This model is fine-tuned on The Latin Library (15M tokens).
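A minimal usage sketch (assuming the standard `transformers` fill-mask pipeline with the XLM-R `<mask>` token; the Latin example sentence is illustrative) could be:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Cicciokr/XLM-Roberta-Base-Latin-Uncased")
# input is lowercased to match the training corpus
print(fill_mask("gallia est omnis divisa in partes <mask>."))
```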
The dataset was cleaned:
- Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- Use of CLTK for sentence splitting and normalisation.
- Deduplication of the corpus.
- Lowercasing of all text. |
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e6_s108_v4_l4_v100 | KingKazma | "2023-08-13T13:41:08Z" | 4 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T13:41:07Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
mlfoundations-dev/oh_teknium_scaling_down_random_0.5 | mlfoundations-dev | "2024-12-21T20:27:44Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-21T15:56:24Z" | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: oh_teknium_scaling_down_random_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oh_teknium_scaling_down_random_0.5
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/oh_teknium_scaling_down_random_0.5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5327 | 0.9992 | 160 | 0.5309 |
| 0.4787 | 1.9984 | 320 | 0.5199 |
| 0.443 | 2.9977 | 480 | 0.5205 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
prxy5604/7de6180a-0ec6-41d9-9b27-bb18c6b240c3 | prxy5604 | "2025-01-14T17:21:24Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T17:01:06Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7de6180a-0ec6-41d9-9b27-bb18c6b240c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- dfe67401950e7525_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dfe67401950e7525_train_data.json
type:
field_input: boe_text_cleaned
field_instruction: text
field_output: tweet_text_cleaned
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/7de6180a-0ec6-41d9-9b27-bb18c6b240c3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/dfe67401950e7525_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7f736a86-e0db-463c-bdc6-381cbf4d05cb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7f736a86-e0db-463c-bdc6-381cbf4d05cb
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7de6180a-0ec6-41d9-9b27-bb18c6b240c3
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8578 | 0.0098 | 1 | 3.0226 |
| 0.0445 | 0.4878 | 50 | 1.4957 |
| 0.0364 | 0.9756 | 100 | 1.3523 |
| 0.0267 | 1.4634 | 150 | 1.2448 |
| 0.0034 | 1.9512 | 200 | 1.2186 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
devngho/gaenari-phi-4-pt-preview | devngho | "2025-03-28T11:04:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-28T10:57:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
digiplay/BlueberryMix_v1 | digiplay | "2024-03-12T19:50:49Z" | 474 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-12T18:15:19Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/14323/blueberrymix
|
yujiepan/opt-350m-w8a8-unstructured90 | yujiepan | "2023-10-16T08:40:48Z" | 3 | 0 | transformers | [
"transformers",
"openvino",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-12T12:06:08Z" | ---
pipeline_tag: text-generation
inference: true
widget:
- text: 'Hello!'
example_title: Hello world
group: Python
library_name: transformers
---
# yujiepan/opt-350m-w8a8-unstructured90
This model is quantized to W8A8 and sparsified with unstructured sparsity by OpenVINO, exported from [facebook/opt-350m](https://huggingface.co/facebook/opt-350m).
**This model is not tuned for accuracy.**
- Quantization: 8-bit symmetric for weights & activations
- Unstructured sparsity in transformer block linear layers: 90%
Codes for export: https://gist.github.com/yujiepan-work/1e6dd9f9c2aac0e9ecaf2ed4d82d1158
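Since the export is in OpenVINO format, a loading sketch (assuming `optimum-intel` with OpenVINO support is installed; this is an illustration rather than a verified snippet from the author) might look like this:
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "yujiepan/opt-350m-w8a8-unstructured90"
model = OVModelForCausalLM.from_pretrained(model_id)  # loads the OpenVINO IR from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello!", max_new_tokens=20))
```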
|
Glass-Shard/Llama-3-Open-Ko-88-ljh-gguf | Glass-Shard | "2024-07-14T10:45:54Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-14T10:39:25Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Glass-Shard
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asifahmed/open_llama_13b_NH | asifahmed | "2023-07-28T10:01:35Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-28T09:47:47Z" | ---
language:
- en
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
license:
- mit
---
# Model Card: Nous-Hermes-Llama2-13b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and the new, for anyone who wanted to keep the new Hermes as similar as possible to the old one, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine.
## Example Outputs:




## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
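As an illustration only (assuming this repository hosts the weights described above and that the standard `transformers` text-generation pipeline applies), generation with the Alpaca format might look like this:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="asifahmed/open_llama_13b_NH")

# build an Alpaca-style prompt as described above
prompt = (
    "### Instruction:\n"
    "Summarize the benefits of unit testing in two sentences.\n\n"
    "### Response:\n"
)
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```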
## Benchmark Results
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|agieval_aqua_rat | 0|acc |0.2362|± |0.0267|
| | |acc_norm|0.2480|± |0.0272|
|agieval_logiqa_en | 0|acc |0.3425|± |0.0186|
| | |acc_norm|0.3472|± |0.0187|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.3510|± |0.0212|
| | |acc_norm|0.3627|± |0.0213|
|agieval_lsat_rc | 0|acc |0.4647|± |0.0305|
| | |acc_norm|0.4424|± |0.0303|
|agieval_sat_en | 0|acc |0.6602|± |0.0331|
| | |acc_norm|0.6165|± |0.0340|
|agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346|
| | |acc_norm|0.4272|± |0.0345|
|agieval_sat_math | 0|acc |0.2909|± |0.0307|
| | |acc_norm|0.2727|± |0.0301|
```
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|arc_challenge| 0|acc |0.5102|± |0.0146|
| | |acc_norm|0.5213|± |0.0146|
|arc_easy | 0|acc |0.7959|± |0.0083|
| | |acc_norm|0.7567|± |0.0088|
|boolq | 1|acc |0.8394|± |0.0064|
|hellaswag | 0|acc |0.6164|± |0.0049|
| | |acc_norm|0.8009|± |0.0040|
|openbookqa | 0|acc |0.3580|± |0.0215|
| | |acc_norm|0.4620|± |0.0223|
|piqa | 0|acc |0.7992|± |0.0093|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7127|± |0.0127|
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192|
|bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123|
|bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287|
```
These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores:
- GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1
- 0.3657 on BigBench, up from 0.328 on hermes-llama1
- 0.372 on AGIEval, up from 0.354 on Hermes-llama1
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position.
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
HPAI-BSC/Bony | HPAI-BSC | "2025-02-12T08:47:25Z" | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2025-01-13T10:02:08Z" | ---
license: cc-by-nc-sa-4.0
---
# Bony & BonyWave Model Card
Self-Supervised Vision Transformers for Prostate Histopathology Analysis
Medium article: https://hpai-bsc.medium.com/medium-article-bony-744fa41b452d
## Model Overview
This repository hosts two variants of the XCiT-medium model trained for prostate histopathology image analysis:
* Bony: Baseline XCiT model pre-trained with DINO.
* BonyWave: Enhanced variant incorporating 3D wavelet decomposition for improved feature extraction.
Both models process 224×224 RGB tiles and were trained on 2.8M image tiles from the PANDA dataset using 24× NVIDIA H100 GPUs.
## Model Description
This XCiT (medium) model has been trained (from scratch) for prostate histopathology image analysis tasks, using images of size `224 × 224` pixels and 24 NVIDIA H100 GPUs. The XCiT architecture is a transformer model that uses cross-covariance attention to process images, thereby improving performance compared to traditional CNN architectures. It was pre-trained on a large dataset using the DINO self-supervised training method.
This model is designed as an encoder on top of which decoders can be applied for downstream tasks. It has been tested on various tasks such as classification and segmentation (see the benchmarks used for evaluation).
---
## Objective and Application Domain
This model was developed for the detection and classification of histopathological features in prostate biopsy images. It can be used for:
- Detection of prostate tumors and other anomalies.
- AI-assisted diagnosis for pathologists.
Specific tasks include cell segmentation and identifying relevant features for prostate histological classification.
---
## Architecture
This medium XCiT model relies on transformer blocks, which are better suited for computer vision tasks due to their ability to capture complex spatial relationships. The architecture has been adapted to work with prostate histopathology images of size `224 × 224`. The total number of parameters in this model is **84M**.
### Technical Details
The XCiT model is trained using the DINO framework, a self-supervised training framework that uses a discriminative objective to learn representations without explicit supervision. The XCiT architecture combines the advantages of transformers while using an efficient attention mechanism to handle the high-dimensional nature of histopathology images.
The loss function used during pre-training is defined as:
$$
L_{DINO} = - \sum_{i} p(t_i | \theta) \log q(s_i | \phi)
$$
where \( p(t_i | \theta) \) is the target distribution (*t* for teacher) and \( q(s_i | \phi) \) is the student distribution.
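A minimal PyTorch sketch of this objective (an illustration of the standard DINO cross-entropy between the centered, sharpened teacher distribution and the student distribution, not the exact training code used here) could be:
```python
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center, t_student=0.1, t_teacher=0.07):
    # teacher target distribution p(t_i | theta): centered, sharpened, no gradient
    p = F.softmax((teacher_logits - center) / t_teacher, dim=-1).detach()
    # student log-distribution log q(s_i | phi)
    log_q = F.log_softmax(student_logits / t_student, dim=-1)
    # L_DINO = - sum_i p(t_i | theta) * log q(s_i | phi), averaged over the batch
    return -(p * log_q).sum(dim=-1).mean()
```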
## Pre-training with DINO
The model was pre-trained using the **DINO** method, a self-supervised pre-training algorithm based on a contrastive objective where the model learns to maximize similarity between augmented views of the same image. This pre-training is performed without any labels, using only histopathology images. The model has been trained on **2.8 million image tiles** (`224 × 224`).
### Training Procedure
The model was trained with an initial learning rate of **0.00075**, using the Adam optimizer. The pre-training was conducted on a prostate histopathology image dataset (the **PANDA dataset**), with `224 × 224`-pixel tiles cropped without overlap from the high-resolution PANDA TIFF images.
Here are all the hyperparameters:
- **Architecture**: XCiT_medium
- **Patch size**: 16
- **Drop path rate**: 0.1
- **Output dimension (out_dim)**: 4096
- **Number of local crops**: 5
- **Teacher temperature (teacher_temp)**: 0.07
- **Teacher temperature during warmup (warmup_teacher_temp)**: 0.04
- **Warmup epochs for teacher**: 10
- **Training epochs**: 15
- **Learning rate (lr)**: 0.00075
- **Minimum learning rate (min_lr)**: 2e-06
- **Warmup epochs for learning rate**: 10
- **Batch size per GPU**: 64
- **Weight decay**: 0.05
- **Weight decay at the end of training (weight_decay_end)**: 0.4
- **Teacher momentum**: 0.996
- **Clip gradient**: 3.0
- **Batch size for DataLoader**: 64
- **Parameter norms**: None (`param_norms = None`)
- **Freeze last layer**: Yes (`freeze_last_layer = 1`)
- **Use FP16 scaler**: Yes (`fp16_scaler_b = True`)
- **Number of workers**: 10
- **Global crops scale (global_crops_scale)**: (0.25, 1.0)
- **Local crops scale (local_crops_scale)**: (0.05, 0.25)
- **Distribution URL**: `"env://"`
## Performance
The model achieved a classification accuracy of **81%** on the PANDA subset and a segmentation error of **2.9e-6** (MSE) on the DeepGleason prostate histopathology dataset. It was also tested on the SICAPv2 benchmark. The model's performance was compared to other models, such as **Hibou**, a ViT model trained on **1.2 billion tiles** of `224 × 224`. For DeepGleason and SICAPv2, segmentation performance is reported as **Mean Squared Error (MSE)**. The summary table is as follows:
| Model | PANDA test subset (Accuracy) ↑ | DeepGleason (MSE) ↓ | SICAPv2 (MSE) ↓ |
|------------------|---------------------------------|---------------------|-----------------|
| **Bony** | 81.2% | 2.934e-06 | 8.0e-04 |
| **BonyWave** | 83.0% | 3.9e-04 | **7.9e-04** |
| **Hibou** | **83.1%** | 1.455e-06 | 0.10 |
| **Histoencoder** | 81.6% | **1.003e-06** | - |
## Wavelet Decomposition
As previously mentioned, histopathology images are highly discontinuous, noisy, and often visually similar. Therefore, applying a filter to these images might help abstract their information, enabling more stable and potentially more effective training. This is why I believe that incorporating wavelet decomposition before the forward pass in our XCiT model could be a promising approach.
### Overview of 3D Wavelet Decomposition
Wavelets are oscillating functions localized in time and space, used to decompose a signal \( f(x, y, z) \) into multiple scales and orientations. 3D wavelet decomposition is a method well-suited for analyzing volumetric data, such as \(224 \times 224 \times 3\) images, by extracting localized information at different spatial scales.
We conducted small-scale experiments using Haar wavelets, considering a single decomposition scale and focusing on the "Approximation" of the image. Despite these limitations, training revealed some potential: tested on the PANDA subset benchmark, **BonyWave** achieved an 83% accuracy on the test set. For more details see https://hpai-bsc.medium.com/medium-article-bony-744fa41b452d
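A rough sketch of that preprocessing step (one plausible reading: a single-level 2D Haar transform applied per channel with PyWavelets, keeping only the approximation coefficients) could be:
```python
import numpy as np
import pywt

def haar_approximation(tile: np.ndarray) -> np.ndarray:
    """Single-level Haar DWT per channel of an H x W x 3 tile, keeping only the approximation."""
    channels = []
    for c in range(tile.shape[-1]):
        cA, (cH, cV, cD) = pywt.dwt2(tile[..., c], "haar")  # approximation + detail bands
        channels.append(cA)
    return np.stack(channels, axis=-1)  # shape (H/2, W/2, 3)
```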
## Limitations and Biases
Although this model was trained for a specific prostate histopathology analysis task, there are several limitations and biases:
- Performance may be affected by the quality of input images, particularly in cases of low resolution or noise.
- The model may be biased by the distribution of the training data, which may not be representative of all patient populations.
- The model may struggle with images containing artifacts or specific conditions not encountered in the training dataset.
- This model should not be used for images other than **prostate histopathology** images, as it has only been trained on this kind of data.
- This model shall not be used for diagnosis alone.
# About
Main model developed and trained by [Emile Vaysse](https://huggingface.co/emilioHugging), under the supervision of [Dario Garcia-Gasulla](https://huggingface.co/dariog).
For more details, see the full [thesis report](https://hpai.bsc.es/files/Rapport_PFE.pdf) (in French).
|
furrutiav/roberta_mixtral_nllfg_vanilla_qnli_none_naive | furrutiav | "2024-12-03T18:25:27Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-12-03T18:24:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1231czx/7b_code_gemma_3epoch | 1231czx | "2024-07-06T13:19:08Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-06T13:16:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tristan/dclm-1b-raw-finetune-correct | Tristan | "2025-03-28T00:32:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-28T00:29:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/tsubaki-mix-v15-sdxl | John6666 | "2024-07-17T06:56:54Z" | 10,083 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"base_model:Kotajiro/tsubaki_mix",
"base_model:finetune:Kotajiro/tsubaki_mix",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-16T13:17:04Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
base_model: Kotajiro/tsubaki_mix
---
Original model is [here](https://civitai.com/models/455220?modelVersionId=649263).
|
mradermacher/Medusa-1.3-L2-7B-GGUF | mradermacher | "2024-06-04T22:17:56Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Medusa-1.3-L2-7B",
"base_model:quantized:Sao10K/Medusa-1.3-L2-7B",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T14:49:34Z" | ---
base_model: Sao10K/Medusa-1.3-L2-7B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/Medusa-1.3-L2-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
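If you prefer to fetch a single quant programmatically, a minimal sketch is shown below; the only assumption is that the `huggingface_hub` package is installed, and the filename is taken from the table in the next section. The returned path can then be passed to any GGUF runtime (llama.cpp, llama-cpp-python, etc.).
```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant from this repo and print its local path
path = hf_hub_download(
    repo_id="mradermacher/Medusa-1.3-L2-7B-GGUF",
    filename="Medusa-1.3-L2-7B.Q4_K_M.gguf",
)
print(path)
```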
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Medusa-1.3-L2-7B-GGUF/resolve/main/Medusa-1.3-L2-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AlexSo79/super-cool-model | AlexSo79 | "2024-03-08T15:46:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-03-08T14:40:07Z" | # Project Name
## Description
[Project Name] is a [brief description of the project]. This README provides an overview of the project, its features, installation instructions, and usage guidelines.
## Features
- Feature 1
- Feature 2
- Feature 3
## Installation
To install [Project Name], follow these steps:
1. Clone the repository: `git clone https://github.com/your_username/your_project.git`
2. Navigate to the project directory: `cd your_project`
3. Install dependencies: `npm install`
## Usage
To use [Project Name], follow these steps:
1. Configure the settings by modifying the `config.js` file.
2. Run the application: `node app.js`
3. Open your web browser and navigate to `http://localhost:3000` to access the application.
## Contributing
Contributions are welcome! To contribute to [Project Name], follow these steps:
1. Fork the repository
2. Create a new branch: `git checkout -b feature_branch`
3. Make your changes and commit them: `git commit -m 'Add new feature'`
4. Push to the branch: `git push origin feature_branch`
5. Submit a pull request
## License
This project is licensed under the [License Name] License - see the [LICENSE.md](LICENSE.md) file for details.
## Contact
For questions or support, please contact [Your Name] at [your email address].
|
MindNetML/dqn-SpaceInvadersNoFrameskip-v4 | MindNetML | "2023-06-24T23:09:09Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-24T23:08:32Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.50 +/- 179.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MindNetML -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MindNetML -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MindNetML
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
JacksonBrune/1dccde87-f123-4e7a-8f0d-65ba8b3e3b4c | JacksonBrune | "2025-01-20T06:55:14Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-1.1-2b-it",
"base_model:adapter:unsloth/gemma-1.1-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-20T06:54:34Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-1.1-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1dccde87-f123-4e7a-8f0d-65ba8b3e3b4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-1.1-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2dc8a857d26b9d3e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2dc8a857d26b9d3e_train_data.json
type:
field_input: type
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/1dccde87-f123-4e7a-8f0d-65ba8b3e3b4c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2dc8a857d26b9d3e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0949b8ab-2916-4c09-9887-756ceeb6089c
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0949b8ab-2916-4c09-9887-756ceeb6089c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1dccde87-f123-4e7a-8f0d-65ba8b3e3b4c
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5167 | 0.0040 | 1 | 1.9321 |
| 2.0491 | 0.0120 | 3 | 1.9195 |
| 1.7849 | 0.0240 | 6 | 1.7230 |
| 1.2944 | 0.0361 | 9 | 1.3552 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF | mradermacher | "2024-12-20T12:55:44Z" | 10 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nilq/lua-stories-slerp-mistral-2L-tiny",
"base_model:quantized:nilq/lua-stories-slerp-mistral-2L-tiny",
"endpoints_compatible",
"region:us"
] | null | "2024-12-20T12:54:43Z" | ---
base_model: nilq/lua-stories-slerp-mistral-2L-tiny
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nilq/lua-stories-slerp-mistral-2L-tiny
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
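As a quick, hedged alternative to a manual download (assuming the `huggingface_hub` package is installed), a single file from the table below can be fetched programmatically:
```python
from huggingface_hub import hf_hub_download

# The f16 file is only ~0.2 GB for this 2-layer tiny model, so it is a reasonable default
print(hf_hub_download(
    repo_id="mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF",
    filename="lua-stories-slerp-mistral-2L-tiny.f16.gguf",
))
```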
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lua-stories-slerp-mistral-2L-tiny-GGUF/resolve/main/lua-stories-slerp-mistral-2L-tiny.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hyper-accel/tiny-random-phi | hyper-accel | "2025-02-10T06:03:36Z" | 136 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-10T06:03:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
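In the meantime, a minimal sketch using the standard `transformers` text-generation pipeline is given below. The generation settings are illustrative assumptions, and since the repo name suggests a tiny randomly initialised test model, the output is not expected to be meaningful text.
```python
from transformers import pipeline

# Load the model by its repo id and generate a short continuation
generator = pipeline("text-generation", model="hyper-accel/tiny-random-phi")
print(generator("Hello, world", max_new_tokens=20))
```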
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBloke/leo-hessianai-13B-chat-bilingual-GGUF | TheBloke | "2023-09-28T11:11:33Z" | 288 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:bjoernp/oasst25-08-23-filtered",
"base_model:LeoLM/leo-hessianai-13b-chat-bilingual",
"base_model:quantized:LeoLM/leo-hessianai-13b-chat-bilingual",
"license:llama2",
"region:us"
] | text-generation | "2023-09-28T10:56:39Z" | ---
base_model: LeoLM/leo-hessianai-13b-chat-bilingual
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_70k
- bjoernp/oasst25-08-23-filtered
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 13B Chat Bilingual
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 13B Chat Bilingual - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 13B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 13B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-13b-chat-bilingual.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-13b-chat-bilingual.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [leo-hessianai-13b-chat-bilingual.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [leo-hessianai-13b-chat-bilingual.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [leo-hessianai-13b-chat-bilingual.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-13b-chat-bilingual.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [leo-hessianai-13b-chat-bilingual.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-13b-chat-bilingual.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [leo-hessianai-13b-chat-bilingual.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [leo-hessianai-13b-chat-bilingual.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [leo-hessianai-13b-chat-bilingual.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-13B-chat-bilingual-GGUF and below it, a specific filename to download, such as: leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-13B-chat-bilingual-GGUF", model_file="leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
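For completeness, here is a comparable llama-cpp-python sketch. It assumes the `llama-cpp-python` package is installed and that the Q4_K_M file has already been downloaded to the current directory; the context size and sampling settings simply mirror the llama.cpp example above.
```python
from llama_cpp import Llama

# Point model_path at a local GGUF file downloaded from this repo
llm = Llama(
    model_path="./leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf",
    n_ctx=4096,       # context length, as in the llama.cpp command above
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)

# Build a ChatML prompt following the template described in this README
chatml = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nAI is going to<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(chatml, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```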
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 13B Chat Bilingual
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-13b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German-translated instruction datasets and their English counterparts.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores:
```
{
"first_turn": 6.13125,
"second_turn": 4.88125,
"categories": {
"writing": 6.75,
"roleplay": 5.55,
"reasoning": 3.3,
"math": 2.25,
"coding": 3.9,
"extraction": 5.8,
"stem": 7.55,
"humanities": 8.95
},
"average": 5.50625
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-13b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 233275 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.001 |
## Dataset Details
```
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%))
-----------------
Accepted: 24427/24427 (100.0%)
Accepted tokens: 9549043
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5054
Avg tokens per sample: 390.9216440823679
-----------------
## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%))
-----------------
Accepted: 68600/68600 (100.0%)
Accepted tokens: 33045040
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 481.7061224489796
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%))
-----------------
Accepted: 3646/3646 (100.0%)
Accepted tokens: 2338738
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 641.4530992868897
-----------------
## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%))
-----------------
Accepted: 8922/8922 (100.0%)
Accepted tokens: 4526427
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5407
Avg tokens per sample: 507.3332212508406
-----------------
## Stats for 'total' (235632 samples (100.0%))
-----------------
Accepted: 235632/235632 (100.0%)
Accepted tokens: 115862397
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 491.70909299246284
-----------------
```
<!-- original-model-card end -->
|
0xid/Reinforce-Pixelcopter-PLE-v0 | 0xid | "2023-01-05T16:04:12Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-05T16:04:02Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 55.60 +/- 41.02
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ap98/rl-course-67af61f8a734193799942967 | Ap98 | "2025-02-14T15:33:46Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-14T15:33:24Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.24 +/- 18.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
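Until the authors add their own snippet, a hedged sketch along the usual RL-course pattern is shown below. Note that the checkpoint filename is an assumption (adjust it to the actual file in this repo).
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename below is assumed, not confirmed
checkpoint = load_from_hub(
    repo_id="Ap98/rl-course-67af61f8a734193799942967",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
# The loaded model can then be evaluated with
# stable_baselines3.common.evaluation.evaluate_policy on a LunarLander-v2 environment.
```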
|
HArmonizedSS/HASS-LLaMA3-Instruct-70B | HArmonizedSS | "2025-03-13T08:02:19Z" | 0 | 0 | null | [
"pytorch",
"llama",
"license:apache-2.0",
"region:us"
] | null | "2025-03-13T06:44:41Z" | ---
license: apache-2.0
---
|
RogerB/afro-xlmr-large-kinre-finetuned-kin-sent3 | RogerB | "2023-10-09T15:18:14Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/afro-xlmr-large-kinre-finetuned",
"base_model:finetune:RogerB/afro-xlmr-large-kinre-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-09T14:54:51Z" | ---
license: mit
base_model: RogerB/afro-xlmr-large-kinre-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: afro-xlmr-large-kinre-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinre-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/afro-xlmr-large-kinre-finetuned](https://huggingface.co/RogerB/afro-xlmr-large-kinre-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8196
- F1: 0.6813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9842 | 1.0 | 1013 | 0.7321 | 0.6975 |
| 0.7881 | 2.0 | 2026 | 0.6053 | 0.7562 |
| 0.6972 | 3.0 | 3039 | 0.5805 | 0.7782 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
kartikgupta373/as15664-508913-pastel-green | kartikgupta373 | "2025-01-29T06:31:16Z" | 14 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-29T06:31:15Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15664 508913 Pastel Green
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15664-508913-pastel-green', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
zzoming/gemma_27b_model_16bit | zzoming | "2025-02-25T11:55:06Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-27b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-27b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-25T11:54:59Z" | ---
base_model: unsloth/gemma-2-27b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** zzoming
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-27b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
urchade/gliner_multi | urchade | "2024-04-10T10:13:48Z" | 33,669 | 124 | gliner | [
"gliner",
"pytorch",
"token-classification",
"multilingual",
"dataset:Universal-NER/Pile-NER-type",
"arxiv:2311.08526",
"license:cc-by-nc-4.0",
"region:us"
] | token-classification | "2024-02-16T20:30:48Z" | ---
license: cc-by-nc-4.0
language:
- multilingual
pipeline_tag: token-classification
datasets:
- Universal-NER/Pile-NER-type
library_name: gliner
---
# Model Card for GLiNER-multi
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.
This version has been trained on the **Pile-NER** dataset (for research purposes only). Versions licensed for commercial use are available (**urchade/gliner_smallv2**, **urchade/gliner_mediumv2**, **urchade/gliner_largev2**)
## Links
* Paper: https://arxiv.org/abs/2311.08526
* Repository: https://github.com/urchade/GLiNER
## Available models
| Release | Model Name | # of Parameters | Language | License |
| - | - | - | - | - |
| v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 |
| v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English <br> English <br> English | cc-by-nc-4.0 |
| v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English <br> English <br> English | apache-2.0 |
| v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) <br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English <br> English <br> English <br> Multilingual | apache-2.0 |
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner
```
## Usage
Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("urchade/gliner_multi")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Saudi Pro League => competitions
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("urchade/gliner_multi")
text = """
Это старый-добрый Римантадин, только в сиропе.
"""
# Gold: Римантадин - Drugname, сиропе - Drugform
labels = ["Drugname", "Drugform"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Римантадин => Drugname
сиропе => Drugform
```
## Named Entity Recognition benchmark result

## Model Authors
The model authors are:
* [Urchade Zaratiana](https://huggingface.co/urchade)
* Nadi Tomeh
* Pierre Holat
* Thierry Charnois
## Citation
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
shihaozz/hg-rl-cartpole-v1 | shihaozz | "2025-02-19T23:27:09Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-19T22:54:07Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: hg-rl-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ShuhuaiRen/NBP-ucf-3b | ShuhuaiRen | "2025-02-18T05:26:49Z" | 0 | 0 | null | [
"video-to-video",
"arxiv:2502.07737",
"license:mit",
"region:us"
] | null | "2025-02-09T04:53:53Z" | ---
pipeline_tag: video-to-video
license: mit
---
This repository contains the model described in [Next Block Prediction: Video Generation via Semi-Autoregressive Modeling](https://hf.co/papers/2502.07737).
Project page: https://renshuhuai-andy.github.io/NBP-project/ |
nichelia/qbloom-medical | nichelia | "2023-10-17T11:26:36Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-17T09:12:12Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
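Expressed in code, the configuration above corresponds roughly to the following `BitsAndBytesConfig` (a sketch for reference; the exact training script is not part of this card):

```python
# Reconstructed from the quantization settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```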
### Framework versions
- PEFT 0.5.0
|
nttx/bd732d48-7e9a-4552-90a2-0313e5715cf9 | nttx | "2025-01-20T14:17:36Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-20T13:49:35Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd732d48-7e9a-4552-90a2-0313e5715cf9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 0885869d04f22c1c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0885869d04f22c1c_train_data.json
type:
field_input: reasoning
field_instruction: user
field_output: assistant
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/bd732d48-7e9a-4552-90a2-0313e5715cf9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/0885869d04f22c1c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63011dc4-7765-40be-8432-883189f06f96
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63011dc4-7765-40be-8432-883189f06f96
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bd732d48-7e9a-4552-90a2-0313e5715cf9
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9946
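This repository holds a PEFT (LoRA) adapter rather than full model weights; a minimal inference sketch, assuming the adapter applies on top of the base model listed above, is:

```python
# Sketch: load the base model and apply this LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2-7B-Instruct"
adapter_id = "nttx/bd732d48-7e9a-4552-90a2-0313e5715cf9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Explain the training setup in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```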
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6953 | 0.0092 | 1 | 1.5332 |
| 0.8849 | 0.4619 | 50 | 0.9943 |
| 1.0907 | 0.9238 | 100 | 0.9804 |
| 0.8653 | 1.3857 | 150 | 0.9950 |
| 0.9203 | 1.8476 | 200 | 0.9946 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jethrowang/vanilla-whisper-tiny | jethrowang | "2025-03-10T15:46:07Z" | 3 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"zh",
"dataset:formospeech/hat_asr_aligned",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | "2024-08-08T18:59:25Z" | ---
language:
- zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- formospeech/hat_asr_aligned
model-index:
- name: Whisper Tiny Hakka Condenser
results: []
metrics:
- cer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hakka Condenser
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the HAT ASR Aligned dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Cer: 10.2307
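No inference example is included in the card; a minimal transcription sketch with the 🤗 `pipeline` API (the audio file path below is a placeholder) would be:

```python
# Sketch: transcribe a Hakka audio clip with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jethrowang/vanilla-whisper-tiny",
    device=0,  # set to -1 to run on CPU
)
print(asr("example_hakka_clip.wav")["text"])  # placeholder file path
```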
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1521
- training_steps: 15210
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.2476 | 0.9993 | 1521 | 0.4437 | 23.6551 |
| 0.0892 | 1.9987 | 3042 | 0.2482 | 14.6693 |
| 0.0543 | 2.9980 | 4563 | 0.2007 | 11.1774 |
| 0.0361 | 3.9974 | 6084 | 0.1847 | 12.4939 |
| 0.0235 | 4.9967 | 7605 | 0.1791 | 10.5405 |
| 0.0157 | 5.9961 | 9126 | 0.1727 | 10.9000 |
| 0.0121 | 6.9954 | 10647 | 0.1724 | 11.1554 |
| 0.0082 | 7.9947 | 12168 | 0.1720 | 10.3694 |
| 0.0059 | 8.9941 | 13689 | 0.1732 | 10.4053 |
| 0.0049 | 9.9934 | 15210 | 0.1729 | 10.2307 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
lesso14/154ee374-36d8-4738-9f73-aa00913f4ed6 | lesso14 | "2025-03-05T12:24:25Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-03-03T13:38:02Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 154ee374-36d8-4738-9f73-aa00913f4ed6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 154ee374-36d8-4738-9f73-aa00913f4ed6
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000214
- train_batch_size: 4
- eval_batch_size: 4
- seed: 140
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.3894 |
| 2.1295 | 0.0728 | 500 | 0.2826 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kokovova/3809f105-de1d-4d2f-b663-d38b8c039e4a | kokovova | "2025-01-11T07:39:08Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-11T07:38:31Z" | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3809f105-de1d-4d2f-b663-d38b8c039e4a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5d93b51dfa8d54b5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5d93b51dfa8d54b5_train_data.json
type:
field_input: context
field_instruction: query
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kokovova/3809f105-de1d-4d2f-b663-d38b8c039e4a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/5d93b51dfa8d54b5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 226a9016-ce3e-4986-b2c2-647820c7339a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 226a9016-ce3e-4986-b2c2-647820c7339a
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3809f105-de1d-4d2f-b663-d38b8c039e4a
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 11.7649 |
| 11.7649 | 0.0038 | 8 | 11.7645 |
| 11.7637 | 0.0077 | 16 | 11.7633 |
| 11.7629 | 0.0115 | 24 | 11.7625 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KatyTheCutie/Repose-V2-2B | KatyTheCutie | "2025-02-12T08:30:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Delta-Vector/Rei-12B",
"base_model:merge:Delta-Vector/Rei-12B",
"base_model:PygmalionAI/Eleusis-12B",
"base_model:merge:PygmalionAI/Eleusis-12B",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:redrix/GodSlayer-12B-ABYSS",
"base_model:merge:redrix/GodSlayer-12B-ABYSS",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T08:22:44Z" | ---
base_model:
- redrix/GodSlayer-12B-ABYSS
- Delta-Vector/Rei-12B
- inflatebot/MN-12B-Mag-Mell-R1
- PygmalionAI/Eleusis-12B
library_name: transformers
tags:
- mergekit
- merge
---
Repose 2B

Test model 3 of 3
Feedback is welcome!~ |
TheBloke/SOLARC-MOE-10.7Bx4-GGUF | TheBloke | "2023-12-28T17:08:48Z" | 220 | 19 | transformers | [
"transformers",
"gguf",
"mixtral",
"text-generation",
"ko",
"base_model:DopeorNope/SOLARC-MOE-10.7Bx4",
"base_model:quantized:DopeorNope/SOLARC-MOE-10.7Bx4",
"license:cc-by-nc-sa-4.0",
"region:us",
"conversational"
] | text-generation | "2023-12-28T14:17:15Z" | ---
base_model: DopeorNope/SOLARC-MOE-10.7Bx4
inference: false
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
model_creator: Seungyoo Lee
model_name: Solarc MOE 10.7Bx4
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Solarc MOE 10.7Bx4 - GGUF
- Model creator: [Seungyoo Lee](https://huggingface.co/DopeorNope)
- Original model: [Solarc MOE 10.7Bx4](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Seungyoo Lee's Solarc MOE 10.7Bx4](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF)
* [Seungyoo Lee's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [solarc-moe-10.7bx4.Q2_K.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q2_K.gguf) | Q2_K | 2 | 12.02 GB| 14.52 GB | smallest, significant quality loss - not recommended for most purposes |
| [solarc-moe-10.7bx4.Q3_K_M.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q3_K_M.gguf) | Q3_K_M | 3 | 15.70 GB| 18.20 GB | very small, high quality loss |
| [solarc-moe-10.7bx4.Q4_0.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q4_0.gguf) | Q4_0 | 4 | 20.34 GB| 22.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [solarc-moe-10.7bx4.Q4_K_M.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q4_K_M.gguf) | Q4_K_M | 4 | 20.37 GB| 22.87 GB | medium, balanced quality - recommended |
| [solarc-moe-10.7bx4.Q5_0.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q5_0.gguf) | Q5_0 | 5 | 24.84 GB| 27.34 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [solarc-moe-10.7bx4.Q5_K_M.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q5_K_M.gguf) | Q5_K_M | 5 | 24.85 GB| 27.35 GB | large, very low quality loss - recommended |
| [solarc-moe-10.7bx4.Q6_K.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q6_K.gguf) | Q6_K | 6 | 29.62 GB| 32.12 GB | very large, extremely low quality loss |
| [solarc-moe-10.7bx4.Q8_0.gguf](https://huggingface.co/TheBloke/SOLARC-MOE-10.7Bx4-GGUF/blob/main/solarc-moe-10.7bx4.Q8_0.gguf) | Q8_0 | 8 | 38.36 GB| 40.86 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SOLARC-MOE-10.7Bx4-GGUF and below it, a specific filename to download, such as: solarc-moe-10.7bx4.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SOLARC-MOE-10.7Bx4-GGUF solarc-moe-10.7bx4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SOLARC-MOE-10.7Bx4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SOLARC-MOE-10.7Bx4-GGUF solarc-moe-10.7bx4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m solarc-moe-10.7bx4.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./solarc-moe-10.7bx4.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### User:\n{prompt}\n\n### Assistant:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./solarc-moe-10.7bx4.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Seungyoo Lee's Solarc MOE 10.7Bx4
**The license is `cc-by-nc-sa-4.0`.**
# **🐻❄️SOLARC-MOE-10.7Bx4🐻❄️**

## Model Details
**Model Developers** Seungyoo Lee(DopeorNope)
I am in charge of Large Language Models (LLMs) at Markr AI team in South Korea.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
SOLARC-MOE-10.7Bx4 is an auto-regressive language model based on the SOLAR architecture.
---
## **Base Model**
[kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
[Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
[VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)
[fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0)
## **Implemented Method**
I have built a model using the Mixture of Experts (MOE) approach, utilizing each of these models as the base.
---
# Implementation Code
## Load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "DopeorNope/SOLARC-MOE-10.7Bx4"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
<!-- original-model-card end -->
|
Shina1234/1234 | Shina1234 | "2024-05-14T20:49:26Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-05-14T20:49:26Z" | ---
license: creativeml-openrail-m
---
|
hgnoi/q5YO55GUmRSS3KQt | hgnoi | "2024-05-25T15:56:59Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T15:54:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silviasapora/gemma-7b-sft-simpo-basic-5e-7-005-v132 | silviasapora | "2025-03-31T00:22:06Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-30T23:32:57Z" | ---
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
This model is a fine-tuned version of a gemma-7b SFT checkpoint (`gemma-7b-sft-basic-5e-5-00-v130-full`) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-simpo-basic-5e-7-005-v132", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/ad4keqiq)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Alfa2166/distilbert-base-uncased-lora-text-classification | Alfa2166 | "2025-04-03T10:33:19Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2025-04-03T10:33:16Z" | ---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: {'accuracy': 0.898}
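Since this repository contains a LoRA adapter, one way to run it (a sketch; the label names are not documented in this card) is with PEFT's auto classes:

```python
# Sketch: load the adapter on top of distilbert-base-uncased and classify a sentence.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "Alfa2166/distilbert-base-uncased-lora-text-classification"
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)

inputs = tokenizer("I really enjoyed this movie!", return_tensors="pt")
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1).item()
print(predicted)  # integer class id; the id-to-label mapping is not documented
```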
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.5209 | {'accuracy': 0.854} |
| 0.4334 | 2.0 | 500 | 0.4871 | {'accuracy': 0.871} |
| 0.4334 | 3.0 | 750 | 0.4843 | {'accuracy': 0.892} |
| 0.1658 | 4.0 | 1000 | 0.6047 | {'accuracy': 0.893} |
| 0.1658 | 5.0 | 1250 | 0.6274 | {'accuracy': 0.898} |
### Framework versions
- PEFT 0.14.0
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
milenarmus/TTB_tallying_noisy_flipped_choice_shuffled_cue_order-model_noise0.8 | milenarmus | "2024-06-19T19:52:47Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-19T19:48:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ichigoberry/pandafish-3-7B-32k-Q2_K-GGUF | ichigoberry | "2024-04-05T19:14:13Z" | 3 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-05T19:14:01Z" | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# ichigoberry/pandafish-3-7B-32k-Q2_K-GGUF
This model was converted to GGUF format from [`ichigoberry/pandafish-3-7B-32k`](https://huggingface.co/ichigoberry/pandafish-3-7B-32k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ichigoberry/pandafish-3-7B-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ichigoberry/pandafish-3-7B-32k-Q2_K-GGUF --model pandafish-3-7b-32k.Q2_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ichigoberry/pandafish-3-7B-32k-Q2_K-GGUF --model pandafish-3-7b-32k.Q2_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pandafish-3-7b-32k.Q2_K.gguf -n 128
```
|
skarsa/babe_source_subsamples_model_alpha_100_idx_3 | skarsa | "2025-02-11T11:55:46Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T15:35:01Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: babe_source_subsamples_model_alpha_100_idx_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babe_source_subsamples_model_alpha_100_idx_3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
WYNN747/Burmese-GPT-qa_sys7_main_no_ovr | WYNN747 | "2024-01-18T05:38:18Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-18T05:15:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
You are an advanced AI chatbot programmed to understand and respond in Burmese. You provide accurate, concise, and contextually relevant answers to a wide range of questions.
Question: "ရန်ကုန်မြို့ အကြောင်းပြောပါ?" ### Answer:
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vivekbiragoni/distilroberta-base-finetuned-wikitext2 | vivekbiragoni | "2023-12-05T07:52:02Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-12-05T07:43:43Z" | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_keras_callback
model-index:
- name: vivekbiragoni/distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vivekbiragoni/distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1545
- Validation Loss: 1.9310
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1545 | 1.9310 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
zxf945/sks-dog | zxf945 | "2023-06-05T03:52:10Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-06-02T06:47:14Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - zxf945/sks-dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). Example images are shown below.




LoRA for the text encoder was enabled: False.
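As a usage sketch (not part of the original card), the adapter can typically be applied on top of the base pipeline with diffusers. The repo ids come from this card; the dtype, step count, and guidance scale are assumptions, and on older diffusers versions `pipe.unet.load_attn_procs(...)` may be needed instead of `load_lora_weights`.

```python
# Hedged sketch: load the SD 1.5 base pipeline and apply these LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("zxf945/sks-dog")  # LoRA repo from this card

# Instance prompt from the card; steps and guidance are illustrative defaults.
image = pipe("a photo of sks dog", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```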
|
YeungNLP/LongQLoRA-Vicuna-13b-8k | YeungNLP | "2023-12-18T14:50:24Z" | 1,433 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2311.04879",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-08T07:18:02Z" | ---
license: apache-2.0
language:
- en
---
# LongQLoRA: Efficient and Effective Method to Extend Context Length of LLMs
## Technical Report
Technical Report: [LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models](https://arxiv.org/abs/2311.04879)
## Introduction
LongQLoRA is a memory-efficient and effective method to extend the context length of Large Language Models with fewer training GPUs.
**On a single 32GB V100 GPU**, LongQLoRA can extend the context length of LLaMA2 7B and 13B from 4096 to 8192 and even to 12k.
LongQLoRA achieves competitive perplexity on the PG19 and Proof-pile datasets after only 1000 finetuning steps; our model outperforms LongLoRA and is very close to MPT-7B-8K.
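As a rough usage sketch (not from the report), the released checkpoint loads like any Llama-family causal LM in transformers; the repo id comes from this card, while the dtype and device settings are assumptions.

```python
# Hedged sketch: load the 8k-context checkpoint as a standard causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YeungNLP/LongQLoRA-Vicuna-13b-8k"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The extended 8192-token context is the point of this checkpoint; the prompt
# below is only a placeholder for a long document plus a question about it.
prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```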
Evaluation perplexity on the PG19 validation and Proof-pile test sets at an evaluation context length of 8192:
| Model | PG19 | Proof-pile |
|---------------------|----------|------------|
| LLaMA2-7B | \>1000 | \>1000 |
| MPT-7B-8K | 7.98 | 2.67 |
| LongLoRA-LoRA-7B-8K | 8.20 | 2.78 |
| LongLoRA-Full-7B-8K | 7.93 | 2.73 |
| **LongQLoRA-7B-8K** | **7.96** | **2.73** | |
eli4s/Bert-L12-h256-A4 | eli4s | "2021-08-17T07:40:05Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | This model was pretrained on the bookcorpus dataset using knowledge distillation.
What sets this model apart is that, although it shares the same architecture as BERT, it has a hidden size of 256. Since it has 4 attention heads, the head size is 64, just as in the BERT base model.
The knowledge distillation was performed using multiple loss functions.
The weights of the model were initialized from scratch.
PS: the tokenizer is the same as that of bert-base-uncased.
To load the model & tokenizer:
````python
from transformers import AutoModelForMaskedLM, BertTokenizer
model_name = "eli4s/Bert-L12-h256-A4"
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
````
To use it as a masked language model:
````python
import torch
sentence = "Let's have a [MASK]."
model.eval()
inputs = tokenizer([sentence], padding='longest', return_tensors='pt')
output = model(inputs['input_ids'], attention_mask=inputs['attention_mask'])
mask_index = inputs['input_ids'].tolist()[0].index(103)  # 103 is the [MASK] token id for bert-base-uncased
masked_token = output['logits'][0][mask_index].argmax(axis=-1)
predicted_token = tokenizer.decode(masked_token)
print(predicted_token)
````
Or we can retrieve the n most likely predictions:
````python
top_n = 5
vocab_size = model.config.vocab_size
logits = output['logits'][0][mask_index].tolist()
top_tokens = sorted(list(range(vocab_size)), key=lambda i:logits[i], reverse=True)[:top_n]
tokenizer.decode(top_tokens)
````
|
mradermacher/Mistral-MetaMath-7b-i1-GGUF | mradermacher | "2025-03-13T07:32:00Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:HanningZhang/Mistral-MetaMath-7b",
"base_model:quantized:HanningZhang/Mistral-MetaMath-7b",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-13T06:50:53Z" | ---
base_model: HanningZhang/Mistral-MetaMath-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HanningZhang/Mistral-MetaMath-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-MetaMath-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
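For a quick start, a hedged Python sketch using `huggingface_hub` and `llama-cpp-python` (both assumed to be installed) is shown below; the filename is the Q4_K_M entry from the table that follows.

```python
# Hedged sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Mistral-MetaMath-7b-i1-GGUF",
    filename="Mistral-MetaMath-7b.i1-Q4_K_M.gguf",  # see the table below for other sizes
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Question: What is 17 * 23? Answer:", max_tokens=32)["choices"][0]["text"])
```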
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-MetaMath-7b-i1-GGUF/resolve/main/Mistral-MetaMath-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Primeness/cyrus5 | Primeness | "2025-02-20T07:34:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-20T07:02:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
perkros/netlist-mistral-80L | perkros | "2025-03-07T13:00:32Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-07T12:58:34Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** perkros
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step107000-bnb | Xu-Ouyang | "2024-07-26T20:11:51Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-26T20:09:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_noAug_task5_organization_fold0 | MayBashendy | "2024-11-27T06:33:10Z" | 184 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-27T06:32:08Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_noAug_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_noAug_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1901
- Qwk: 0.2697
- Mse: 1.1901
- Rmse: 1.0909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 2.0 | 2 | 1.9964 | 0.1135 | 1.9964 | 1.4129 |
| No log | 4.0 | 4 | 1.3526 | 0.3147 | 1.3526 | 1.1630 |
| No log | 6.0 | 6 | 1.2744 | 0.2324 | 1.2744 | 1.1289 |
| No log | 8.0 | 8 | 1.1907 | 0.2172 | 1.1907 | 1.0912 |
| No log | 10.0 | 10 | 1.1901 | 0.2697 | 1.1901 | 1.0909 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
rbonazzola/distilbert-base-uncased-finetuned-ner | rbonazzola | "2024-10-21T22:05:25Z" | 71 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-10-21T17:58:21Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: rbonazzola/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rbonazzola/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0336
- Validation Loss: 0.0604
- Train Precision: 0.9208
- Train Recall: 0.9348
- Train F1: 0.9277
- Train Accuracy: 0.9831
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
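That said, a minimal inference sketch may be useful (not part of the original card): the repo id is taken from this card, and because the checkpoint is a TensorFlow export the pipeline is pinned to the TF framework as an assumption.

```python
# Hedged sketch: run token classification with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rbonazzola/distilbert-base-uncased-finetuned-ner",  # repo id from this card
    framework="tf",                 # the repo carries TensorFlow weights
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```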
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1929 | 0.0717 | 0.8951 | 0.9179 | 0.9063 | 0.9789 | 0 |
| 0.0537 | 0.0613 | 0.9240 | 0.9299 | 0.9269 | 0.9828 | 1 |
| 0.0336 | 0.0604 | 0.9208 | 0.9348 | 0.9277 | 0.9831 | 2 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
|
osanseviero/sft_cml4 | osanseviero | "2024-01-21T13:39:28Z" | 91 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:ag_news",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-22T13:59:03Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: sft_cml4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_cml4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3980
## Model description
More information needed
## Intended uses & limitations
More information needed
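That said, as a small illustration of basic usage (a sketch, not from the card; the repo id is taken from this card and the prompt and sampling settings are assumptions):

```python
# Hedged sketch: generate news-style text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="osanseviero/sft_cml4")
print(generator("Breaking news:", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```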
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7271 | 0.32 | 200 | 3.6065 |
| 3.346 | 0.64 | 400 | 3.4732 |
| 3.0685 | 0.96 | 600 | 3.3985 |
| 2.1435 | 1.28 | 800 | 3.4433 |
| 1.9834 | 1.6 | 1000 | 3.4203 |
| 1.8937 | 1.92 | 1200 | 3.3980 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.14.0
|
DrNicefellow/Qwen1.5-7B-Chat-8bpw-h8-exl2 | DrNicefellow | "2024-02-19T02:28:59Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-19T00:50:33Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
---
# Qwen1.5-7B-Chat-8.0bpw-h8-exl2
This is an 8.0bpw/h8 quantized version of [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
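A hedged sketch of running the quant with ExLlamaV2's Python API follows; it mirrors the pattern of the project's bundled inference example, so names may need small adjustments for the installed version, and the local model path is a placeholder.

```python
# Hedged sketch, modelled on exllamav2's example inference script.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Qwen1.5-7B-Chat-8bpw-h8-exl2"  # local copy of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
print(generator.generate_simple("Write a haiku about quantization.", settings, 100))
```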
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note about which one you want me to drink.
|
featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF | featherless-ai-quants | "2024-11-10T19:47:03Z" | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:allenai/tulu-v2.5-dpo-13b-uf-mean",
"base_model:quantized:allenai/tulu-v2.5-dpo-13b-uf-mean",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-07T02:03:48Z" | ---
base_model: allenai/tulu-v2.5-dpo-13b-uf-mean
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# allenai/tulu-v2.5-dpo-13b-uf-mean GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [allenai-tulu-v2.5-dpo-13b-uf-mean-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-IQ4_XS.gguf) | 6694.33 MB |
| Q2_K | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q3_K_S.gguf) | 5396.83 MB |
| Q4_K_M | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [allenai-tulu-v2.5-dpo-13b-uf-mean-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF/blob/main/allenai-tulu-v2.5-dpo-13b-uf-mean-Q8_0.gguf) | 13190.58 MB |
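As a brief, hedged example of consuming one of these files from Python (the filename below is the Q4_K_M entry from the table above; `llama-cpp-python` and its `from_pretrained` helper are assumed to be available):

```python
# Hedged sketch: fetch one quant from the table above and run it locally.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/allenai-tulu-v2.5-dpo-13b-uf-mean-GGUF",
    filename="allenai-tulu-v2.5-dpo-13b-uf-mean-Q4_K_M.gguf",  # from the table above
    n_ctx=4096,
)
out = llm("Explain preference tuning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```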
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
thangla01/025bcbac-958c-4eb3-8626-52674cb368e8 | thangla01 | "2025-01-24T00:06:30Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-23T22:54:31Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 025bcbac-958c-4eb3-8626-52674cb368e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2f9d17f500743687_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f9d17f500743687_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/025bcbac-958c-4eb3-8626-52674cb368e8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f9d17f500743687_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 98af1e55-ce5e-4ee3-ab3e-4e976ed9c6af
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 98af1e55-ce5e-4ee3-ab3e-4e976ed9c6af
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 025bcbac-958c-4eb3-8626-52674cb368e8
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7723 | 0.0047 | 200 | 1.9292 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LykaAustria/nicpras_finetuned_yolo | LykaAustria | "2025-01-03T08:14:11Z" | 7 | 0 | transformers | [
"transformers",
"object-detection",
"yolo",
"custom-model",
"finetuned",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2025-01-02T02:09:17Z" | ---
license: agpl-3.0
library_name: transformers
pipeline_tag: object-detection
tags:
- object-detection
- yolo
- custom-model
- finetuned
---
# LykaAustria/nicpras_finetuned_yolo
This is a fine-tuned YOLO model trained for object detection on a custom dataset.
## Model Details
- **Base Model:** YOLOv3
- **Fine-tuned On:** [Dataset Name]
- **Task:** Object Detection
- **Framework:** Ultralytics
## Intended Use
This model is designed for detecting objects in images. It works best for the following use cases:
- Use Case 1
- Use Case 2
## Configuration File
The configuration file (`config.yaml`) is required to use this model in CVAT. Download it: https://huggingface.co/LykaAustria/nicpras_finetuned_yolo/blob/main/config.yaml.
## How to Use
You can load this model using the `transformers` library as follows:
```python
from transformers import pipeline
# Load the model
model = pipeline("object-detection", model="LykaAustria/nicpras_finetuned_yolo")
# Run inference
results = model("path_to_image.jpg")
print(results)
```
|
TOMFORD79/JBL_TOM9 | TOMFORD79 | "2025-02-12T18:22:44Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-12T17:56:35Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
yogeshs710/mhtc-url-ft-3 | yogeshs710 | "2024-07-24T13:24:45Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-24T13:22:50Z" | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
---
# Uploaded model
- **Developed by:** yogeshs710
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF | GordonChang | "2025-03-18T09:17:23Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:GordonChang/bakeneko-instruct-finetuned-v1-merged-test",
"base_model:quantized:GordonChang/bakeneko-instruct-finetuned-v1-merged-test",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-18T09:15:48Z" | ---
base_model: GordonChang/bakeneko-instruct-finetuned-v1-merged-test
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
---
# GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF
This model was converted to GGUF format from [`GordonChang/bakeneko-instruct-finetuned-v1-merged-test`](https://huggingface.co/GordonChang/bakeneko-instruct-finetuned-v1-merged-test) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/GordonChang/bakeneko-instruct-finetuned-v1-merged-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF --hf-file bakeneko-instruct-finetuned-v1-merged-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF --hf-file bakeneko-instruct-finetuned-v1-merged-test-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF --hf-file bakeneko-instruct-finetuned-v1-merged-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo GordonChang/bakeneko-instruct-finetuned-v1-merged-test-Q4_K_M-GGUF --hf-file bakeneko-instruct-finetuned-v1-merged-test-q4_k_m.gguf -c 2048
```
|
Shrilaxmi/llama2-qlora-finetunined-french | Shrilaxmi | "2023-09-15T12:11:56Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-15T12:11:51Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch using these values follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
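The list above corresponds to a `BitsAndBytesConfig`. As a hedged sketch, the adapter could be loaded as follows; note that the base model is not stated in this card, so `meta-llama/Llama-2-7b-hf` below is only an assumption inferred from the repo name.

```python
# Hedged sketch: rebuild the quantization config above and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: the adapter name suggests a Llama-2 base; the card does not say.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Shrilaxmi/llama2-qlora-finetunined-french")
```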
### Framework versions
- PEFT 0.6.0.dev0
|
mrHunghddddd/0950ffa8-85d1-47fb-851a-ae364fd4d285 | mrHunghddddd | "2025-01-20T11:44:46Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-20T11:24:35Z" | ---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0950ffa8-85d1-47fb-851a-ae364fd4d285
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0fdb745b22813a15_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0fdb745b22813a15_train_data.json
type:
field_input: rational_answer
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/0950ffa8-85d1-47fb-851a-ae364fd4d285
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0fdb745b22813a15_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1f4ca561-cb4c-44b3-a55a-85ea32a3d504
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1f4ca561-cb4c-44b3-a55a-85ea32a3d504
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0950ffa8-85d1-47fb-851a-ae364fd4d285
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.376 | 0.2315 | 200 | 0.8943 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/ff356de7-1145-4f99-82d6-ab72e9f0a01e | ClarenceDan | "2025-01-14T09:34:31Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-14T09:32:17Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff356de7-1145-4f99-82d6-ab72e9f0a01e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c72347a853cd6a0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c72347a853cd6a0f_train_data.json
type:
field_input: num
field_instruction: title_main
field_output: texte
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/ff356de7-1145-4f99-82d6-ab72e9f0a01e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c72347a853cd6a0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5e5ac88-0840-4409-8bd3-d1c3569952cf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d5e5ac88-0840-4409-8bd3-d1c3569952cf
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ff356de7-1145-4f99-82d6-ab72e9f0a01e
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.201 | 0.0017 | 1 | 1.5077 |
| 5.9339 | 0.0050 | 3 | 1.5059 |
| 5.6117 | 0.0099 | 6 | 1.4910 |
| 6.7069 | 0.0149 | 9 | 1.4314 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/7b7b0525-dab2-4472-a321-0969e627a0cd | kk-aivio | "2025-01-16T16:56:21Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-16T16:55:22Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b7b0525-dab2-4472-a321-0969e627a0cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a2f0228484464e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a2f0228484464e3_train_data.json
type:
field_input: Case
field_instruction: Title
field_output: Summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/7b7b0525-dab2-4472-a321-0969e627a0cd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a2f0228484464e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 392c1656-b507-42d3-94c2-758a96b60589
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 392c1656-b507-42d3-94c2-758a96b60589
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7b7b0525-dab2-4472-a321-0969e627a0cd
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.6062 | 0.0029 | 1 | nan |
| 7.0016 | 0.0088 | 3 | nan |
| 6.3593 | 0.0176 | 6 | nan |
| 6.4283 | 0.0264 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MatouK98/test_1 | MatouK98 | "2024-06-06T05:36:37Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-06T05:36:37Z" | ---
license: apache-2.0
---
|
ENERGY-DRINK-LOVE/Qwen2.5-14B-Nhn-Dpo-V5.2-Adapter-input2k-Merged | ENERGY-DRINK-LOVE | "2025-03-13T02:37:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T02:30:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
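Until the authors fill this section in, here is a minimal, hedged sketch for a Qwen2-style chat model in 🤗 Transformers (the chat template, device settings, and generation parameters are assumptions, not values confirmed by this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ENERGY-DRINK-LOVE/Qwen2.5-14B-Nhn-Dpo-V5.2-Adapter-input2k-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-style prompt with the tokenizer's chat template (assumed to be configured)
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```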
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
andrijdavid/Marcoroni-7B-v3-GGUF | andrijdavid | "2023-12-27T14:05:16Z" | 34 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"mistral",
"text-generation",
"GGUF",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-27T12:56:36Z" | ---
language:
- en
license: apache-2.0
tags:
- GGUF
quantized_by: andrijdavid
---
# Marcoroni-7B-v3-GGUF
- Original model: [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/Marcoroni-7B-v3-GGUF and below it, a specific filename to download, such as: Marcoroni-7B-v3-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/Marcoroni-7B-v3-GGUF Marcoroni-7B-v3-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/Marcoroni-7B-v3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/Marcoroni-7B-v3-GGUF Marcoroni-7B-v3-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Marcoroni-7B-v3-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Marcoroni-7B-v3-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Marcoroni-7B-v3-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
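As a supplement to those guides, here is a minimal, hedged sketch of using this repo's GGUF files through LangChain's llama-cpp-python wrapper (the import path assumes a recent `langchain-community` release, and the model file must be downloaded first as described above):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Marcoroni-7B-v3-f16.gguf",  # any of the quantised files from this repo works
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    n_ctx=4096,
    temperature=0.7,
)

# See the prompt template in the original model card below for the format this model expects
print(llm.invoke("### Instruction:\nWrite a short story about llamas.\n\n### Response:\n"))
```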
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Marcoroni-7B-v3
# Marcoroni-7B-v3
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/20uN0wMu2zTyVGgXV9PIo.png" width = 60%>
# Updates
December 11, 2023:
Marcoroni-7B-v3 has placed **#5** overall and **#1** for 7 billion parameter models on the [Hugging Face Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)!
# Model Details
* **Trained by**: AIDC AI-Business.
* **Model type:** **Marcoroni-7B-v3** is an auto-regressive language model based on mistralai/Mistral-7B-v0.1.
* **Language(s)**: English
This is a DPO fine-tuned model of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling).
We fine-tuned it using 32k examples generated by GPT-4 and other models.
# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
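For example, filling this template from Python before passing it to any of the runtimes above might look like the following sketch (the helper name is illustrative):
```python
def build_alpaca_prompt(instruction: str) -> str:
    # Wrap a plain instruction in the Alpaca-style template shown above
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Summarise the plot of Don Quixote in two sentences.")
```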
<!-- original-model-card end --> |
jorge-henao/gpt2-small-spanish-disco-poetry-15 | jorge-henao | "2022-03-29T05:17:49Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-29T04:20:26Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-disco-poetry-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry-15
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
SundayNwovu/todo-schedular-recent | SundayNwovu | "2023-09-06T11:14:54Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-06T11:12:42Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
haedahae/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_peckish_cheetah | haedahae | "2025-04-07T23:40:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic peckish cheetah",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T12:21:30Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_peckish_cheetah
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic peckish cheetah
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_peckish_cheetah
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haedahae/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_peckish_cheetah", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FounderOfHuggingface/gpt2_gen_lora_r16_wikitext2_t300_e20_member_shadow11 | FounderOfHuggingface | "2024-01-16T11:02:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-16T11:02:03Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
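Until the authors fill this section in, a minimal, hedged sketch for loading this LoRA adapter on top of the `gpt2` base model named above is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "FounderOfHuggingface/gpt2_gen_lora_r16_wikitext2_t300_e20_member_shadow11")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("The history of Wikipedia begins", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```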
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
siddharthbulia/therapy-bot | siddharthbulia | "2023-09-02T16:55:11Z" | 24 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"therapist",
"medical",
"en",
"dataset:siddharthbulia/therapy-data-set-llama",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-02T16:34:22Z" | ---
license: apache-2.0
datasets:
- siddharthbulia/therapy-data-set-llama
language:
- en
tags:
- therapist
- medical
---
## Nintee Therapy Bot
We built an extremely helpful therapist bot that engages the patient and helps the patient open up.
The bot has extremely high patience, knows everything about therapy and mental well-being, is empathetic to the patient, wants the best for the patient, and gives actionable advice that the patient can use to improve their day-to-day life.
The bot is trained on data from the [Pandora](https://www.kaggle.com/datasets/elvis23/mental-health-conversational-data) dataset.
In the V2 version, the Nintee bot will leverage real transcripts from patients and world-class therapists, with proper consent from the patient and complete anonymity.
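A minimal, hedged usage sketch (the prompt and generation settings are illustrative, not the authors' recommended values):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="siddharthbulia/therapy-bot")
reply = chat(
    "I have been feeling anxious about work lately. What can I do?",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)[0]["generated_text"]
print(reply)
```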
pmranu/facebook-opt-dialogsum-finetuned | pmranu | "2025-02-15T09:26:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-15T09:25:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
c14kevincardenas/limbxy_seq_t2_heads2_layers1 | c14kevincardenas | "2025-02-20T04:22:07Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"custom_model",
"image-sequence-classification",
"vision",
"generated_from_trainer",
"base_model:c14kevincardenas/beit-large-patch16-384-limb",
"base_model:finetune:c14kevincardenas/beit-large-patch16-384-limb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-19T23:30:34Z" | ---
library_name: transformers
license: apache-2.0
base_model: c14kevincardenas/beit-large-patch16-384-limb
tags:
- image-sequence-classification
- vision
- generated_from_trainer
model-index:
- name: limbxy_seq_t2_heads2_layers1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# limbxy_seq_t2_heads2_layers1
This model is a fine-tuned version of [c14kevincardenas/beit-large-patch16-384-limb](https://huggingface.co/c14kevincardenas/beit-large-patch16-384-limb) on the c14kevincardenas/beta_caller_284_limbxy_seq_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0048
- Rmse: 0.0692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2014
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0135 | 1.0 | 150 | 0.0123 | 0.1108 |
| 0.0102 | 2.0 | 300 | 0.0080 | 0.0892 |
| 0.0065 | 3.0 | 450 | 0.0107 | 0.1036 |
| 0.0049 | 4.0 | 600 | 0.0088 | 0.0936 |
| 0.0042 | 5.0 | 750 | 0.0072 | 0.0846 |
| 0.0033 | 6.0 | 900 | 0.0071 | 0.0841 |
| 0.0028 | 7.0 | 1050 | 0.0065 | 0.0806 |
| 0.0023 | 8.0 | 1200 | 0.0071 | 0.0842 |
| 0.0022 | 9.0 | 1350 | 0.0064 | 0.0802 |
| 0.0018 | 10.0 | 1500 | 0.0059 | 0.0766 |
| 0.0014 | 11.0 | 1650 | 0.0055 | 0.0739 |
| 0.0014 | 12.0 | 1800 | 0.0061 | 0.0781 |
| 0.0013 | 13.0 | 1950 | 0.0056 | 0.0748 |
| 0.0009 | 14.0 | 2100 | 0.0055 | 0.0743 |
| 0.0017 | 15.0 | 2250 | 0.0058 | 0.0762 |
| 0.0012 | 16.0 | 2400 | 0.0054 | 0.0736 |
| 0.0008 | 17.0 | 2550 | 0.0053 | 0.0725 |
| 0.0008 | 18.0 | 2700 | 0.0055 | 0.0740 |
| 0.0007 | 19.0 | 2850 | 0.0057 | 0.0757 |
| 0.0007 | 20.0 | 3000 | 0.0056 | 0.0746 |
| 0.0006 | 21.0 | 3150 | 0.0055 | 0.0739 |
| 0.0005 | 22.0 | 3300 | 0.0051 | 0.0717 |
| 0.0006 | 23.0 | 3450 | 0.0053 | 0.0727 |
| 0.0005 | 24.0 | 3600 | 0.0052 | 0.0720 |
| 0.0006 | 25.0 | 3750 | 0.0055 | 0.0741 |
| 0.0005 | 26.0 | 3900 | 0.0051 | 0.0714 |
| 0.0005 | 27.0 | 4050 | 0.0052 | 0.0720 |
| 0.0005 | 28.0 | 4200 | 0.0053 | 0.0725 |
| 0.0003 | 29.0 | 4350 | 0.0051 | 0.0712 |
| 0.0004 | 30.0 | 4500 | 0.0051 | 0.0717 |
| 0.0004 | 31.0 | 4650 | 0.0052 | 0.0719 |
| 0.0003 | 32.0 | 4800 | 0.0052 | 0.0720 |
| 0.0003 | 33.0 | 4950 | 0.0051 | 0.0715 |
| 0.0002 | 34.0 | 5100 | 0.0053 | 0.0731 |
| 0.0003 | 35.0 | 5250 | 0.0052 | 0.0723 |
| 0.0002 | 36.0 | 5400 | 0.0050 | 0.0708 |
| 0.0002 | 37.0 | 5550 | 0.0049 | 0.0703 |
| 0.0002 | 38.0 | 5700 | 0.0050 | 0.0708 |
| 0.0002 | 39.0 | 5850 | 0.0049 | 0.0700 |
| 0.0002 | 40.0 | 6000 | 0.0049 | 0.0698 |
| 0.0002 | 41.0 | 6150 | 0.0049 | 0.0699 |
| 0.0002 | 42.0 | 6300 | 0.0049 | 0.0701 |
| 0.0001 | 43.0 | 6450 | 0.0049 | 0.0697 |
| 0.0002 | 44.0 | 6600 | 0.0049 | 0.0698 |
| 0.0001 | 45.0 | 6750 | 0.0048 | 0.0696 |
| 0.0001 | 46.0 | 6900 | 0.0048 | 0.0692 |
| 0.0001 | 47.0 | 7050 | 0.0048 | 0.0694 |
| 0.0001 | 48.0 | 7200 | 0.0048 | 0.0694 |
| 0.0001 | 49.0 | 7350 | 0.0048 | 0.0692 |
| 0.0001 | 50.0 | 7500 | 0.0048 | 0.0693 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
zemaia/exponentiall-xtract-7B-v01-based-finetuned-T4-sharded-4bit-notmerged | zemaia | "2023-10-29T21:39:06Z" | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:alexsherstinsky/Mistral-7B-v0.1-sharded",
"base_model:adapter:alexsherstinsky/Mistral-7B-v0.1-sharded",
"region:us"
] | null | "2023-10-29T21:38:48Z" | ---
library_name: peft
base_model: alexsherstinsky/Mistral-7B-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
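Until the authors fill this section in, a minimal, hedged sketch for loading this adapter on top of the base model listed above is shown below (the 4-bit settings mirror the quantization config reported at the end of this card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "alexsherstinsky/Mistral-7B-v0.1-sharded"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "zemaia/exponentiall-xtract-7B-v01-based-finetuned-T4-sharded-4bit-notmerged")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```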
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
Liogl/RL-Course | Liogl | "2023-12-03T18:48:43Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-03T18:48:06Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.46 +/- 32.22
name: mean_reward
verified: false
---
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
avinot/LoLlama3.2-1B-lora-5ep | avinot | "2025-04-10T02:21:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | "2025-04-10T01:20:09Z" | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: LoLlama3.2-1B-lora-5ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoLlama3.2-1B-lora-5ep
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1949 | 1.0 | 847 | 2.9763 |
| 2.8805 | 2.0 | 1694 | 2.8830 |
| 2.8078 | 3.0 | 2541 | 2.8345 |
| 2.7723 | 4.0 | 3388 | 2.8094 |
| 2.7464 | 5.0 | 4235 | 2.8014 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
HiTZ/judge-eus | HiTZ | "2024-11-22T10:48:26Z" | 8 | 1 | null | [
"safetensors",
"text-generation",
"eu",
"dataset:BAAI/JudgeLM-100K",
"base_model:orai-nlp/Llama-eus-8B",
"base_model:finetune:orai-nlp/Llama-eus-8B",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-10-30T15:30:35Z" | ---
license: apache-2.0
datasets:
- BAAI/JudgeLM-100K
language:
- eu
base_model:
- orai-nlp/Llama-eus-8B
pipeline_tag: text-generation
---
HiTZ/judge-eus is a language model designed to evaluate Basque text.
It was developed for the [MCG-COLING-2025 Shared Task](<https://sites.google.com/view/multilang-counterspeech-gen/shared-task>), which focused on generating counter-narratives against hate speech using a knowledge-based corpus specifically designed for the task. The model served to evaluate the quality of these counter-narratives, assessing their ability to address and mitigate hate speech effectively. The complete code for task evaluation is available in the [hitz-zentroa/eval-MCG-COLING-2025](https://github.com/hitz-zentroa/eval-MCG-COLING-2025?tab=readme-ov-file) repository. |
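A minimal, hedged loading sketch (the exact judge prompt template is documented in the evaluation repository linked above; the prompt below is only a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/judge-eus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Placeholder input: replace with the judge template from the evaluation repository
prompt = "Ebaluatu kontra-narrazio honen kalitatea: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```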
InsultedByMathematics/alpha_1e-3_beta_4e-3 | InsultedByMathematics | "2025-02-01T18:53:00Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-01T18:48:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ncbateman/fc2ba2fb-1ff0-4e66-b0bb-667bfdfd0d59 | ncbateman | "2024-11-07T02:00:27Z" | 48 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-11-06T22:16:57Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc2ba2fb-1ff0-4e66-b0bb-667bfdfd0d59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
dataset_processes: 12
datasets:
- data_files:
- databricks-dolly-15k_train_data.json
ds_type: json
path: /workspace/input_data/databricks-dolly-15k_train_data.json
type:
field_input: instruction
field_instruction: context
field_output: response
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 512
eval_table_size: null
evals_per_epoch: 2
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ncbateman/fc2ba2fb-1ff0-4e66-b0bb-667bfdfd0d59
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2000
micro_batch_size: 2
mlflow_experiment_name: /tmp/databricks-dolly-15k_train_data.json
model_type: LlamaForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
save_strategy: steps
sequence_len: 4096
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.05
wandb_entity: breakfasthut
wandb_mode: online
wandb_project: tuning-miner
wandb_run: miner
wandb_runid: fc2ba2fb-1ff0-4e66-b0bb-667bfdfd0d59
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fc2ba2fb-1ff0-4e66-b0bb-667bfdfd0d59
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 443
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1698 | 0.0023 | 1 | 2.0832 |
| 1.4209 | 0.5014 | 222 | 1.4105 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kadirnar/emilia-de-lora | kadirnar | "2025-04-05T12:12:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:canopylabs/orpheus-3b-0.1-pretrained",
"base_model:finetune:canopylabs/orpheus-3b-0.1-pretrained",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T12:11:57Z" | ---
base_model: canopylabs/orpheus-tts-0.1-pretrained
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kadirnar
- **License:** apache-2.0
- **Finetuned from model:** canopylabs/orpheus-tts-0.1-pretrained
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ostris/Flex.1-alpha | ostris | "2025-01-19T03:23:32Z" | 22,414 | 338 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2025-01-18T21:59:00Z" | ---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-image
---
# Flex.1-alpha
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/Flex.1-alpha.jpg?resize=1024%2C573&ssl=1" style="max-width: 100%; height: auto;">
## Description
Flex.1 alpha is a pre-trained base 8 billion parameter rectified flow transformer capable of generating images from text descriptions. It has a similar architecture to [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), but with fewer double transformer blocks (8 vs 19). It began as a finetune of [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) which allows the model to retain the Apache 2.0 license. A guidance embedder has been trained for it so that it no longer requires CFG to generate images.
## Model Specs
- 8 billion parameters
- Guidance embedder
- True CFG capable
- Fine tunable
- OSI compliant license (Apache 2.0)
- 512 token length input
## Support Needed
I am just a solo Machine Learning Engineer doing this in my free time with my own money because I truly believe in open source models. I have already spent a significant amount of time and money to get this model to where it is. But to get this model where I want it to be, I need to continue to dump a significant amount of time and money into it, well beyond what I am financially capable of doing on my own. I have set up a Patreon for those individuals and organizations that want to financially support this project. I plan to also allow support in other ways soon for those that prefer to get their hands dirty.
<a href="https://www.patreon.com/c/ostris" target="_blank"><img style="width: 300px; max-width: 100%;" src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/support-me-on-patreon.png?w=1080&ssl=1" title=""></a>
## Usage
The model can be used almost identically to [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) and will work out of the box with most inference engines that support it (Diffusers, ComfyUI, etc.).
For ComfyUI, there is an all-in-one file called `Flex.1-alpha.safetensors`. Put this in your checkpoints folder and use it like you would [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
More detailed instructions coming soon.
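Until then, here is a minimal Diffusers sketch. It assumes the standard `FluxPipeline` loading path works for this repository the same way it does for FLUX.1-dev; the dtype, guidance value, and step count below are illustrative assumptions, not official recommendations.
```python
# Minimal sketch, assuming the standard diffusers FluxPipeline API applies to this
# repository as it does for FLUX.1-dev; settings below are illustrative only.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("ostris/Flex.1-alpha", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a lighthouse on a rocky coast at sunset, detailed photograph",
    guidance_scale=3.5,        # handled by the trained guidance embedder, as with FLUX.1-dev
    num_inference_steps=28,
    height=1024,
    width=1024,
).images[0]
image.save("flex1-alpha-sample.png")
```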
## History
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/openflux_is_now_flex1.jpg?resize=1024%2C328&ssl=1" style="max-width: 100%; height: auto;">
Flex.1 started as the [FLUX.1-schnell-training-adapter](https://huggingface.co/ostris/FLUX.1-schnell-training-adapter) to make training LoRAs on [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) possible. The original goal was to train a LoRA that can be activated during training to allow for fine tuning on the step-compressed model. I merged this adapter into [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) and continued to train it on images generated by the [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) model to further break down the compression, without injecting any new data, with the goal of making a stand-alone base model. This became [OpenFLUX.1](https://huggingface.co/ostris/OpenFLUX.1), which was continuously trained for months, resulting in 10 version releases. After the final release of [OpenFLUX.1](https://huggingface.co/ostris/OpenFLUX.1), I began training the model on new data and experimenting with pruning. I ended up with pruned versions of [OpenFLUX.1](https://huggingface.co/ostris/OpenFLUX.1) that were 7B and 4B parameters (unreleased). Around this time, [flux.1-lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha) was released and produced very good results. I decided to follow their pruning strategy and ended up with an 8B parameter version. I continued to train the model, adding new datasets and doing various experimental training tricks to improve the quality of the model.
At this point, the model still required CFG in order to generate images. I decided the model needed a guidance embedder similar to [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), but I wanted it to be bypassable to keep the model flexible and trainable. So I trained a new guidance embedder independently of the model weights; it behaves like an optional adapter, leaving the model capable of being trained and inferenced without it.
## Fine Tuning
Flex.1 is designed to be fine tunable. It will finetune very similarly to [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), with the exception of the guidance embedder. With [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), it is best to fine tune with a guidance of 1. However, with Flex.1, it is best to fine tune with the guidance embedder completely bypassed.
Day 1 LoRA training support is in [AI-Toolkit](https://github.com/ostris/ai-toolkit). You can use the [example config](https://github.com/ostris/ai-toolkit/blob/main/config/examples/train_lora_flex_24gb.yaml) to get started.
## Special Thanks
A special thanks to the following people/organizations, but also the entire ML community and countless researchers.
- Black Forest Labs
- Glif
- Lodestone Rock
- RunDiffusion
- Freepik
- Countless others…
## Samples
<div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 10px; padding: 10px;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737161331089_10.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167425163_314.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737162955051_73.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737161516524_20.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737162268769_36.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167721907_330.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737170374288_473.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169910530_448.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737163845287_121.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169224246_411.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164550064_159.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167870244_338.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167777539_333.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167276694_306.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166720218_276.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166571862_268.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166219459_249.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165978328_236.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-362">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737162287328_37.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737170411384_475.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737173749749_655.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-363">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165199316_194.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-364">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175437577_746.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-367">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165681542_220.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-366">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737176235170_789.jpg?resize=1024%2C1024&ssl=1" alt="" class="wp-image-365">
<img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737176494913_803.jpg?w=1080&ssl=1" alt="" class="wp-image-368">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737163047768_78.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737163437281_99.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737163455823_100.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737163604175_108.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164123452_136.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164308937_146.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164383098_150.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164494404_156.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164791299_172-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737164995268_183.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165032362_185.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165050909_186.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165143688_191.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165217859_195.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165273515_198.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165848491_229.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165941254_234.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737165996864_237.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166089601_242.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166126719_244.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166163822_246.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166219459_249-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166497703_264.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166571862_268-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166627520_271.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737166831496_282.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167183948_301.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167295246_307.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167332374_309.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180946458_1043.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181039197_1048.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181057727_1049.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181113354_1052.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181298875_1062.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181354574_1065.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181725470_1085.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737181985100_1099.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182170614_1109.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182226228_1112.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182430216_1123.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182578574_1131.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182652829_1135.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737182931126_1150.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737183098007_1159.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179054469_941.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179073003_942.jpg?resize=1024%2C1024&ssl=1" alt="" style="height: auto;">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179221353_950.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179499609_965.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179666577_974.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179685132_975.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179796399_981.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179814957_982.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179870623_985.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737179889154_986.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180037602_994.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180408645_1014.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180686818_1029.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180723917_1031.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737180779553_1034.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737183302016_1170.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737183450439_1178.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737183598775_1186.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737184043920_1210.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737171598294_539.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737171913517_556.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737172098958_566.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737172432942_584.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737172488621_587.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737172655584_596.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737173082090_619.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737173137760_622.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175252098_736.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175344844_741.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175437577_746-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175734354_762.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737175919837_772-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737176160989_785.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737176253750_790.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167425163_314-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167684810_328.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167721907_330-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167777539_333-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167870244_338-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167944447_342.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737167981529_344.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168222672_357.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168426734_368.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168500911_372.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168519451_373.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168556564_375.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737168723492_384.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169020300_400.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169057364_402.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737176736015_816.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177069798_834.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177181116_840.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177199679_841.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177310958_847.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177329512_848.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177589241_862.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177607803_863.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177626365_864.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177663479_866.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737177904624_879.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737178127219_891.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737178275633_899.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737178294169_900.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737178627958_918.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169131553_406.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169187189_409.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169224246_411-1.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169261354_413.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169354073_418.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169391172_420.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169483883_425.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737169595148_431.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737170300050_469.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737170782274_495.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737171079014_511.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737171097571_512.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
<img src="https://i0.wp.com/ostris.com/wp-content/uploads/2025/01/1737171264485_521.jpg?resize=1024%2C1024&ssl=1" alt="" style="width: 100%; height:auto">
</div> |
netcat420/MFANN3bV0.8.10 | netcat420 | "2024-05-12T06:07:49Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:liminerity/Phigments12",
"base_model:merge:liminerity/Phigments12",
"base_model:netcat420/MFANN3bv0.6",
"base_model:merge:netcat420/MFANN3bv0.6",
"base_model:netcat420/MFANN3bv0.7.10",
"base_model:merge:netcat420/MFANN3bv0.7.10",
"base_model:netcat420/MFANN3bv0.8",
"base_model:merge:netcat420/MFANN3bv0.8",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-12T04:44:04Z" | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- netcat420/MFANN3bv0.6
- liminerity/Phigments12
- netcat420/MFANN3bv0.7.10
- netcat420/MFANN3bv0.8
---
# MFANNv0.8.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
* [netcat420/MFANN3bv0.7.10](https://huggingface.co/netcat420/MFANN3bv0.7.10)
* [netcat420/MFANN3bv0.8](https://huggingface.co/netcat420/MFANN3bv0.8)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.8
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.7.10
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
kk-aivio/663497de-ec5c-4b94-bacd-cc4f841f0af3 | kk-aivio | "2025-01-19T02:24:17Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | "2025-01-19T02:22:20Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 663497de-ec5c-4b94-bacd-cc4f841f0af3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c24f8b8bdecb5dad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c24f8b8bdecb5dad_train_data.json
type:
field_input: opinion
field_instruction: citation
field_output: syllabus
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/663497de-ec5c-4b94-bacd-cc4f841f0af3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c24f8b8bdecb5dad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1dfbe7d7-462e-4e6f-b93d-77ff87fd0e6b
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1dfbe7d7-462e-4e6f-b93d-77ff87fd0e6b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 663497de-ec5c-4b94-bacd-cc4f841f0af3
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW from bitsandbytes (`OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4776 | 0.0003 | 1 | 3.1475 |
| 3.1294 | 0.0009 | 3 | 3.1224 |
| 3.0702 | 0.0019 | 6 | 2.9718 |
| 2.8696 | 0.0028 | 9 | 2.9110 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ntc-ai/SDXL-LoRA-slider.time-lapse-photography | ntc-ai | "2024-01-08T23:12:45Z" | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2024-01-08T23:12:42Z" |
---
language:
- en
thumbnail: "images/evaluate/time lapse photography.../time lapse photography_17_3.0.png"
widget:
- text: time lapse photography
output:
url: images/time lapse photography_17_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_19_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_20_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_21_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "time lapse photography"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - time lapse photography (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/time lapse photography_17_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_17_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_17_3.0.png" width=256 height=256 /> |
| <img src="images/time lapse photography_19_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_19_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_19_3.0.png" width=256 height=256 /> |
| <img src="images/time lapse photography_20_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_20_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
time lapse photography
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.time-lapse-photography', weight_name='time lapse photography.safetensors', adapter_name="time lapse photography")
# Activate the LoRA
pipe.set_adapters(["time lapse photography"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, time lapse photography"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 950 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain-v0.2 | PKU-Alignment | "2024-08-10T03:49:59Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment",
"value alignment",
"AI safety",
"safety",
"LLM",
"history",
"conversational",
"dataset:PKU-Alignment/ProgressGym-HistText",
"arxiv:2406.20087",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-12T17:19:55Z" | ---
license: cc-by-4.0
tags:
- alignment
- value alignment
- AI safety
- safety
- LLM
- history
datasets:
- PKU-Alignment/ProgressGym-HistText
base_model:
- meta-llama/Meta-Llama-3-8B
---
# ProgressGym-HistLlama3-8B-C017-pretrain
## Overview
#### The ProgressGym Framework

**ProgressGym-HistLlama3-8B-C017-pretrain** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in.
To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):
> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.
#### ProgressGym-HistLlama3-8B-C017-pretrain
ProgressGym-HistLlama3-8B-C017-pretrain is one of the **36 historical language models** in the ProgressGym framework. It is a pretrained model without instruction-tuning. For the instruction-tuned version, see [ProgressGym-HistLlama3-8B-C017-instruct](https://huggingface.co/PKU-Alignment/ProgressGym-HistLlama3-8B-C017-instruct).
**ProgressGym-HistLlama3-8B-C017-pretrain is under continual iteration.** Improving upon the current version, new versions of the model are currently being trained to reflect historical moral tendencies in ever more comprehensive ways.
**ProgressGym-HistLlama3-8B-C017-pretrain is a 17th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 17th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:
- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP
... with the following training results:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5442 | 0.2028 | 200 | 2.5552 |
| 2.5376 | 0.4057 | 400 | 2.5096 |
| 2.4487 | 0.6085 | 600 | 2.4831 |
| 2.5324 | 0.8114 | 800 | 2.4690 |
| 2.265 | 1.0142 | 1000 | 2.4733 |
| 2.3002 | 1.2170 | 1200 | 2.4736 |
| 2.29 | 1.4199 | 1400 | 2.4734 |
| 2.2566 | 1.6227 | 1600 | 2.4725 |
| 2.3052 | 1.8256 | 1800 | 2.4721 |
| 2.2702 | 2.0284 | 2000 | 2.4734 |
| 2.2411 | 2.2312 | 2200 | 2.4746 |
| 2.2413 | 2.4341 | 2400 | 2.4749 |
| 2.216 | 2.6369 | 2600 | 2.4749 |
| 2.2696 | 2.8398 | 2800 | 2.4747 |
| 2.2455 | 3.0426 | 3000 | 2.4752 |
| 2.216 | 3.2454 | 3200 | 2.4753 |
| 2.2348 | 3.4483 | 3400 | 2.4757 |
| 2.238 | 3.6511 | 3600 | 2.4753 |
| 2.2349 | 3.8540 | 3800 | 2.4752 |
Note that the training data volume for the continued pretraining stage is capped at 3GB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.
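Since this is the pretrained (not instruction-tuned) variant, it is prompted with plain text to complete rather than chat turns. The snippet below is a minimal loading sketch using the standard `transformers` causal-LM API; it is an assumed usage pattern for illustration and is not part of the original training or evaluation setup.
```python
# Minimal sketch, assuming the standard transformers causal-LM API; as a pretrained
# (non-instruct) checkpoint, the model is prompted with plain text to complete.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Concerning the duties of princes, it is commonly held that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```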
## Links
- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** [PKU-Alignment/ProgressGym-LeaderBoard](https://huggingface.co/spaces/PKU-Alignment/ProgressGym-LeaderBoard)
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[Documentation]** [ProgressGym Documentation](https://pku-alignment.github.io/ProgressGym/)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
## Citation
If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.
```text
@article{progressgym,
title={ProgressGym: Alignment with a Millennium of Moral Progress},
author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
journal={arXiv preprint arXiv:2406.20087},
eprint={2406.20087},
eprinttype = {arXiv},
year={2024}
}
```
## Ethics Statement
- **Copyright information of historical text data sources**:
- Project Gutenberg, one of the four sources of our historical text data, consists only of texts in the public domain.
- For the text that we draw from the Internet Archive, we only include items uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
- The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
- The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models. |
mradermacher/Kyro-n1.1-3B-i1-GGUF | mradermacher | "2025-03-17T03:28:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"kyro",
"open-neo",
"open-source",
"deepseek-r1",
"en",
"zh",
"fr",
"es",
"pt",
"de",
"it",
"ru",
"ja",
"ko",
"vi",
"th",
"ar",
"fa",
"he",
"tr",
"cs",
"pl",
"hi",
"bn",
"ur",
"id",
"ms",
"lo",
"my",
"ceb",
"km",
"tl",
"nl",
"base_model:open-neo/Kyro-n1.1-3B",
"base_model:quantized:open-neo/Kyro-n1.1-3B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-16T21:16:54Z" | ---
base_model: open-neo/Kyro-n1.1-3B
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
license_name: qwen-research
quantized_by: mradermacher
tags:
- reasoning
- kyro
- open-neo
- open-source
- deepseek-r1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/open-neo/Kyro-n1.1-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kyro-n1.1-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
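For a quick start, the sketch below uses the `llama-cpp-python` bindings together with `huggingface_hub` to download and run a single-file quant. This is an assumed setup for illustration, not a recommendation specific to this repository; the file name matches the i1-Q4_K_M entry in the table below.
```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub). The filename matches the
# i1-Q4_K_M entry listed in the quant table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Kyro-n1.1-3B-i1-GGUF",
    filename="Kyro-n1.1-3B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain in one sentence what an imatrix quant is.", max_tokens=96)
print(out["choices"][0]["text"])
```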
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kyro-n1.1-3B-i1-GGUF/resolve/main/Kyro-n1.1-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ChemFM/ChemFM-1B | ChemFM | "2024-10-14T19:43:35Z" | 393 | 1 | null | [
"safetensors",
"llama",
"chemistry",
"molecules",
"SMILES",
"UniChem",
"ChemicalFoundationModel",
"dataset:UniChem",
"license:mit",
"region:us"
] | null | "2024-10-14T17:53:24Z" | ---
datasets:
- UniChem
license: mit
tags:
- chemistry
- molecules
- SMILES
- UniChem
- ChemicalFoundationModel
---
|
navin-kumar-j/whisper-base-ta | navin-kumar-j | "2025-04-02T17:13:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-02T10:37:55Z" | ---
library_name: transformers
language:
- ta
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Base Ta - Navin Kumar J
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ta
split: None
args: 'config: ta, split: test'
metrics:
- name: Wer
type: wer
value: 54.641807706619794
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Ta - Navin Kumar J
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2913
- Wer: 54.6418
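For inference, a minimal sketch with the 🤗 `pipeline` API is shown below; the audio path is a placeholder, and the Tamil language/task decoding hints are assumptions rather than settings documented in this card.
```python
# Minimal sketch: Tamil speech recognition with this checkpoint via the transformers pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="navin-kumar-j/whisper-base-ta",
)

# "sample_ta.wav" is a placeholder path to a 16 kHz Tamil audio clip.
result = asr(
    "sample_ta.wav",
    generate_kwargs={"language": "tamil", "task": "transcribe"},  # assumed decoding hints
)
print(result["text"])
```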
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
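As a rough sketch, these hyperparameters map onto `Seq2SeqTrainingArguments` roughly as follows; the output directory and any logging or data-collator details are assumptions not stated in this card.
```python
# Rough sketch of how the listed hyperparameters translate to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ta",   # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    fp16=True,                        # "Native AMP" mixed precision
    seed=42,
)
```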
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2192 | 0.2773 | 1000 | 0.3592 | 62.3484 |
| 0.2075 | 0.5546 | 2000 | 0.3165 | 57.5738 |
| 0.1881 | 0.8319 | 3000 | 0.2993 | 55.5657 |
| 0.1504 | 1.1093 | 4000 | 0.2913 | 54.6418 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gmojko/Reinforce-CartPole2 | gmojko | "2023-01-18T21:33:54Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-18T21:33:45Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
yacine-djm/fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty_2 | yacine-djm | "2023-07-19T13:50:57Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-19T13:23:07Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty_2
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0304
- F1: 0.9166
- Roc Auc: 0.9580
- Accuracy: 0.9460
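A minimal inference sketch with the text-classification pipeline is shown below; the example sentence is invented, and returning all label scores (rather than a single top label) is an assumption based on the multi-label metrics reported above.
```python
# Minimal sketch: score a sustainability-style sentence with this classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="yacine-djm/fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty_2",
    top_k=None,  # return scores for every label (assumed multi-label setup)
)

scores = clf("The hotel sources its food locally and has eliminated single-use plastics.")
print(scores)
```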
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 63 | 0.1679 | 0.0 | 0.5 | 0.5991 |
| No log | 1.99 | 126 | 0.0858 | 0.6802 | 0.7690 | 0.8048 |
| No log | 2.99 | 189 | 0.0525 | 0.8974 | 0.9423 | 0.9361 |
| No log | 4.0 | 253 | 0.0415 | 0.9041 | 0.9470 | 0.9395 |
| No log | 5.0 | 316 | 0.0381 | 0.9023 | 0.9479 | 0.9381 |
| No log | 5.99 | 379 | 0.0345 | 0.9082 | 0.9466 | 0.9420 |
| No log | 6.99 | 442 | 0.0321 | 0.9155 | 0.9546 | 0.9465 |
| 0.0888 | 8.0 | 506 | 0.0319 | 0.9106 | 0.9560 | 0.9415 |
| 0.0888 | 9.0 | 569 | 0.0313 | 0.9123 | 0.9572 | 0.9420 |
| 0.0888 | 9.96 | 630 | 0.0304 | 0.9166 | 0.9580 | 0.9460 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
fermaat/poca-SoccerTwos | fermaat | "2023-02-14T09:22:24Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-02-14T09:22:19Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: fermaat/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
AaryanK/tensorboard_logs | AaryanK | "2025-02-15T18:07:16Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-02-15T10:18:45Z" | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: tensorboard_logs
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for tensorboard_logs
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AaryanK/tensorboard_logs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
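For orientation, here is a minimal sketch of what a GRPO run with TRL can look like; the reward function, dataset preprocessing, and configuration values are illustrative assumptions, not the author's actual training script.
```python
# Minimal sketch of a GRPO fine-tuning loop with TRL (illustrative, not the original script).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumes the dataset exposes a "problem" column that can serve as the prompt.
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

# Toy reward: prefer completions that contain a boxed final answer.
def boxed_reward(completions, **kwargs):
    return [1.0 if "\\boxed" in c else 0.0 for c in completions]

config = GRPOConfig(output_dir="tensorboard_logs", logging_steps=10)  # assumed settings
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=boxed_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```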
### Framework versions
- TRL: 0.14.0
- Transformers: 4.47.1
- Pytorch: 2.6.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |