| modelId (string, 5-137 chars) | author (string, 2-42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-03-31 18:27:03) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 405 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-03-31 18:26:53) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
VishwanathanR/resnet-50 | VishwanathanR | "2022-12-01T11:42:15Z" | 90 | 0 | transformers | [
"transformers",
"tf",
"resnet",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-12-01T10:41:15Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: resnet-50
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# resnet-50
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.6.2
- Datasets 2.7.1
- Tokenizers 0.13.2
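A minimal inference sketch (an addition, not part of the auto-generated card; it assumes the hosted TF weights load through the standard `transformers` image-classification pipeline, and `photo.jpg` is a placeholder path):
```python
from transformers import pipeline

# Placeholder image path; any local image file works.
classifier = pipeline("image-classification", model="VishwanathanR/resnet-50")
print(classifier("photo.jpg"))  # top labels with confidence scores
```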
|
Allenpai/rec-alpaca | Allenpai | "2023-07-02T11:40:30Z" | 2 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-02T11:11:42Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
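For reference, the same settings can be expressed with the standard `transformers` `BitsAndBytesConfig` (a reconstruction of the list above, not code from the original card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above, field for field.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```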
### Framework versions
- PEFT 0.4.0.dev0 |
MrPotato/ner-bert-multilingual-uncased-geocite | MrPotato | "2025-02-05T20:00:21Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-02-04T22:51:39Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-multilingual-uncased
model-index:
- name: ner-bert-multilingual-uncased-geocite
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bert-multilingual-uncased-geocite
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
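A minimal inference sketch (an addition, not part of the generated card; assumes the standard `transformers` token-classification pipeline):
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="MrPotato/ner-bert-multilingual-uncased-geocite",
    aggregation_strategy="simple",
)
print(ner("Smith, J. (2020). Urban growth in Nairobi, Kenya."))
```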
|
TuringsSolutions/q-FrozenLake-v1-4x4-noSlippery | TuringsSolutions | "2024-03-24T04:40:50Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-24T04:40:47Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="TuringsSolutions/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
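The usage snippet above assumes the `load_from_hub` helper from the Hugging Face Deep RL course notebook; a minimal equivalent, reconstructed here from `huggingface_hub` primitives (an assumption, not code shipped with this repo):
```python
import pickle

import gymnasium as gym  # the course snippets also work with classic `gym`
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dict from the Hub and deserialize it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    # Only unpickle files from sources you trust.
    with open(path, "rb") as f:
        return pickle.load(f)
```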
|
MaziyarPanahi/MeliodasPercival_01_Experiment28Experiment29 | MaziyarPanahi | "2024-04-09T23:26:55Z" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment28Experiment29-7B",
"base_model:merge:automerger/Experiment28Experiment29-7B",
"base_model:automerger/MeliodasPercival_01-7B",
"base_model:merge:automerger/MeliodasPercival_01-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-09T23:13:10Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: MeliodasPercival_01_Experiment28Experiment29
base_model:
- automerger/MeliodasPercival_01-7B
- automerger/Experiment28Experiment29-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# MeliodasPercival_01_Experiment28Experiment29
MeliodasPercival_01_Experiment28Experiment29 is a merge of the following models:
* [automerger/MeliodasPercival_01-7B](https://huggingface.co/automerger/MeliodasPercival_01-7B)
* [automerger/Experiment28Experiment29-7B](https://huggingface.co/automerger/Experiment28Experiment29-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/MeliodasPercival_01_Experiment28Experiment29"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
lgk03/WITHINAPPS_NDD-claroline_test-content-CWAdj | lgk03 | "2024-07-16T08:11:37Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-16T07:39:37Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: WITHINAPPS_NDD-claroline_test-content-CWAdj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WITHINAPPS_NDD-claroline_test-content-CWAdj
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0034
- Accuracy: 0.9992
- F1: 0.9992
- Precision: 0.9992
- Recall: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.9978 | 111 | 0.0129 | 0.9983 | 0.9983 | 0.9983 | 0.9983 |
| No log | 1.9955 | 222 | 0.0062 | 0.9992 | 0.9992 | 0.9992 | 0.9992 |
| No log | 2.9933 | 333 | 0.0050 | 0.9989 | 0.9989 | 0.9989 | 0.9989 |
| No log | 4.0 | 445 | 0.0031 | 0.9997 | 0.9997 | 0.9997 | 0.9997 |
| 0.0157 | 4.9888 | 555 | 0.0034 | 0.9992 | 0.9992 | 0.9992 | 0.9992 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
albertus-sussex/veriscrape-sbert-movie-reference_9_to_verify_1-fold-5 | albertus-sussex | "2025-03-30T20:56:20Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:15855",
"loss:TripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-30T20:55:47Z" | (card content not captured: the fetch returned Hugging Face's HTTP 429 "We had to rate limit you" error page instead of the model card) |
randiapoorva/Nikita-Panigrahi-Flux-V2 | randiapoorva | "2024-09-16T23:49:46Z" | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-16T22:59:25Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Nikita Panigrahi
---
# Nikita Panigrahi Flux V2
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Nikita Panigrahi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('randiapoorva/Nikita-Panigrahi-Flux-V2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
reaverlee/distilbert-base-uncased-finetuned-emotion | reaverlee | "2022-12-19T21:32:26Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-06T20:01:28Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3181
- eval_accuracy: 0.902
- eval_f1: 0.8982
- eval_runtime: 1.1006
- eval_samples_per_second: 1817.111
- eval_steps_per_second: 29.074
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
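A minimal usage sketch (an addition to the auto-generated card; assumes the standard text-classification pipeline):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="reaverlee/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # emotion label with score
```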
|
lesso/96040ed1-51b8-4643-871d-b541b5552394 | lesso | "2025-02-03T15:49:55Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-02-03T15:26:03Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 96040ed1-51b8-4643-871d-b541b5552394
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 589fe59dca0f3dbe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/589fe59dca0f3dbe_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/96040ed1-51b8-4643-871d-b541b5552394
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god13/589fe59dca0f3dbe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 71879dd7-6005-4d28-8cab-23fdd2df3703
wandb_project: ab-god13
wandb_run: your_name
wandb_runid: 71879dd7-6005-4d28-8cab-23fdd2df3703
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 96040ed1-51b8-4643-871d-b541b5552394
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6998 | 0.0032 | 1 | 0.6837 |
| 2.2354 | 0.1625 | 50 | 0.5304 |
| 2.2099 | 0.3249 | 100 | 0.5084 |
| 2.1265 | 0.4874 | 150 | 0.5009 |
| 2.0551 | 0.6499 | 200 | 0.4971 |
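Since this repo ships a LoRA adapter rather than merged weights, inference typically loads the base model first and then attaches the adapter via `peft` (a sketch, not from the original card; dtype and device settings are illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-v0.3", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso/96040ed1-51b8-4643-871d-b541b5552394")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3")
```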
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Subhashree-Sahu-XX/wATCH.Subhashree-Sahu.Subhashree-Sahu.video.original | Subhashree-Sahu-XX | "2025-02-21T23:50:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-21T23:50:19Z" | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)](https://lekedvideo.xyz/watch/)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://lekedvideo.xyz/watch/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/) |
HASAN55/distilbert_squad_384 | HASAN55 | "2023-06-01T20:29:37Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-06-01T17:19:45Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: HASAN55/distilbert_squad_384
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HASAN55/distilbert_squad_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7649
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.5285 | 0 |
| 0.9679 | 1 |
| 0.7649 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
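A minimal inference sketch (an addition to the auto-generated card; assumes the standard question-answering pipeline, which can also serve TF checkpoints):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="HASAN55/distilbert_squad_384")
result = qa(
    question="What architecture does the model use?",
    context="The checkpoint is a DistilBERT model fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```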
|
Jainil30/wav2vec2-base-csa-10-rev3 | Jainil30 | "2022-01-12T14:55:33Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-csa-10-rev3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-csa-10-rev3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5869
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 18.7934 | 25.0 | 200 | 3.5869 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
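A minimal transcription sketch (an addition; `sample.wav` is a placeholder path). Given the 1.0 WER reported above, outputs from this checkpoint are unlikely to be usable as-is:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Jainil30/wav2vec2-base-csa-10-rev3")
print(asr("sample.wav")["text"])  # placeholder audio file; wav2vec2 expects 16 kHz mono
```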
|
aleegis12/682b277f-231c-45b6-a656-f08dc9d80126 | aleegis12 | "2025-01-29T04:36:29Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2025-01-29T04:10:18Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 682b277f-231c-45b6-a656-f08dc9d80126
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 400230338a45b747_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/400230338a45b747_train_data.json
type:
field_instruction: intent
field_output: snippet
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/682b277f-231c-45b6-a656-f08dc9d80126
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/400230338a45b747_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a5f31f21-6edc-4064-a734-058a53dba9cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a5f31f21-6edc-4064-a734-058a53dba9cc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 682b277f-231c-45b6-a656-f08dc9d80126
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8296 | 0.0125 | 1 | 2.4410 |
| 4.5613 | 0.6270 | 50 | 1.0685 |
| 3.3615 | 1.2602 | 100 | 1.1451 |
| 2.0365 | 1.8871 | 150 | 1.0374 |
| 1.4185 | 2.5204 | 200 | 1.1715 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DBangshu/V4_Base_GPT2_e5_8_6 | DBangshu | "2024-11-29T16:07:37Z" | 81 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-29T16:07:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
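The card leaves this section blank; as a placeholder, a generic sketch assuming standard causal-LM usage for a GPT-2 checkpoint (unverified against the authors' intended usage):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DBangshu/V4_Base_GPT2_e5_8_6")
model = AutoModelForCausalLM.from_pretrained("DBangshu/V4_Base_GPT2_e5_8_6")

inputs = tokenizer("Once upon a time", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```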
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/gpt2-alpaca-GGUF | tensorblock | "2024-11-16T00:56:24Z" | 81 | 0 | null | [
"gguf",
"instruct-tuning",
"gpt2",
"alpaca",
"TensorBlock",
"GGUF",
"en",
"dataset:tatsu-lab/alpaca",
"base_model:vicgalle/gpt2-alpaca",
"base_model:quantized:vicgalle/gpt2-alpaca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-11-10T10:12:43Z" | ---
license: mit
datasets:
- tatsu-lab/alpaca
language:
- en
tags:
- instruct-tuning
- gpt2
- alpaca
- TensorBlock
- GGUF
base_model: vicgalle/gpt2-alpaca
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## vicgalle/gpt2-alpaca - GGUF
This repo contains GGUF format model files for [vicgalle/gpt2-alpaca](https://huggingface.co/vicgalle/gpt2-alpaca).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gpt2-alpaca-Q2_K.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q2_K.gguf) | Q2_K | 0.076 GB | smallest, significant quality loss - not recommended for most purposes |
| [gpt2-alpaca-Q3_K_S.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q3_K_S.gguf) | Q3_K_S | 0.084 GB | very small, high quality loss |
| [gpt2-alpaca-Q3_K_M.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q3_K_M.gguf) | Q3_K_M | 0.091 GB | very small, high quality loss |
| [gpt2-alpaca-Q3_K_L.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q3_K_L.gguf) | Q3_K_L | 0.095 GB | small, substantial quality loss |
| [gpt2-alpaca-Q4_0.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q4_0.gguf) | Q4_0 | 0.099 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gpt2-alpaca-Q4_K_S.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q4_K_S.gguf) | Q4_K_S | 0.100 GB | small, greater quality loss |
| [gpt2-alpaca-Q4_K_M.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q4_K_M.gguf) | Q4_K_M | 0.105 GB | medium, balanced quality - recommended |
| [gpt2-alpaca-Q5_0.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q5_0.gguf) | Q5_0 | 0.114 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gpt2-alpaca-Q5_K_S.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q5_K_S.gguf) | Q5_K_S | 0.114 GB | large, low quality loss - recommended |
| [gpt2-alpaca-Q5_K_M.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q5_K_M.gguf) | Q5_K_M | 0.118 GB | large, very low quality loss - recommended |
| [gpt2-alpaca-Q6_K.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q6_K.gguf) | Q6_K | 0.129 GB | very large, extremely low quality loss |
| [gpt2-alpaca-Q8_0.gguf](https://huggingface.co/tensorblock/gpt2-alpaca-GGUF/blob/main/gpt2-alpaca-Q8_0.gguf) | Q8_0 | 0.165 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gpt2-alpaca-GGUF --include "gpt2-alpaca-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gpt2-alpaca-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
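Once downloaded, the file can be run with any llama.cpp-compatible runtime; for example, a sketch using the `llama-cpp-python` bindings (an assumption — any GGUF-capable runtime works):
```python
from llama_cpp import Llama

# Point model_path at the file downloaded above.
llm = Llama(model_path="MY_LOCAL_DIR/gpt2-alpaca-Q2_K.gguf")
# Alpaca-style prompt is an assumption; the card's prompt-template section is empty.
out = llm("### Instruction:\nName three fruits.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```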
|
mradermacher/Savant-LLaMa-70B-GGUF | mradermacher | "2025-02-22T10:05:12Z" | 267 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksGraveyard/Savant-LLaMa-70B",
"base_model:quantized:TareksGraveyard/Savant-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-24T15:53:58Z" | ---
base_model: TareksGraveyard/Savant-LLaMa-70B
language:
- en
library_name: transformers
license: llama3.3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksGraveyard/Savant-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Savant-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Savant-LLaMa-70B-GGUF/resolve/main/Savant-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TongZheng1999/gemma-2-9b-it-mix-v2-111 | TongZheng1999 | "2025-03-14T08:20:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-14T08:03:08Z" | ---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-mix-v2-111
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-mix-v2-111
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/gemma-2-9b-it-mix-v2-111", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/zr0ncpva)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Rasooli/emotion-analysis-with-distilbert | Rasooli | "2024-07-07T09:43:16Z" | 7 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-07T09:17:34Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Rasooli/emotion-analysis-with-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rasooli/emotion-analysis-with-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1450
- Validation Loss: 0.1363
- Train Accuracy: 0.936
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
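The serialized optimizer above corresponds to a plain Adam; reconstructed in code for readability (a sketch, assuming `tf.keras` defaults for the fields not shown):
```python
import tensorflow as tf

# Matches the listed config: Adam, lr 5e-5, default betas/epsilon, XLA-compiled.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    jit_compile=True,
)
```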
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3948 | 0.1717 | 0.932 | 0 |
| 0.1450 | 0.1363 | 0.936 | 1 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tscstudios/kvj8gjldpiyswqpppnwofmig8512_2d321f26-8cb3-4a36-84d9-6dda168ea19b | tscstudios | "2025-02-20T00:33:04Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-20T00:33:03Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Kvj8Gjldpiyswqpppnwofmig8512_2D321F26 8Cb3 4A36 84D9 6Dda168Ea19B
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/kvj8gjldpiyswqpppnwofmig8512_2d321f26-8cb3-4a36-84d9-6dda168ea19b', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Mistral-of-Realms-7b-Instruct-GGUF | mradermacher | "2025-03-30T05:19:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"en",
"base_model:Akila/Mistral-of-Realms-7b-Instruct",
"base_model:quantized:Akila/Mistral-of-Realms-7b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-30T05:06:34Z" | ---
base_model: Akila/Mistral-of-Realms-7b-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- axolotl
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Akila/Mistral-of-Realms-7b-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-of-Realms-7b-Instruct-GGUF/resolve/main/Mistral-of-Realms-7b-Instruct.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF | mradermacher | "2024-12-31T16:30:56Z" | 46 | 1 | transformers | [
"transformers",
"gguf",
"function-call",
"mistral",
"en",
"base_model:InterSync/Mistral-7B-Instruct-v0.2-Function-Calling",
"base_model:quantized:InterSync/Mistral-7B-Instruct-v0.2-Function-Calling",
"license:cc",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-31T16:06:14Z" | ---
base_model: InterSync/Mistral-7B-Instruct-v0.2-Function-Calling
language:
- en
library_name: transformers
license: cc
quantized_by: mradermacher
tags:
- function-call
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/InterSync/Mistral-7B-Instruct-v0.2-Function-Calling
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-Function-Calling-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-Function-Calling.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PrunaAI/fxmarty-tiny-testing-falcon-alibi-bnb-4bit-smashed | PrunaAI | "2024-08-02T15:47:29Z" | 4 | 0 | pruna-engine | [
"pruna-engine",
"safetensors",
"falcon",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-04-04T07:48:00Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results
Detailed efficiency metrics coming soon!
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo fxmarty/tiny-testing-falcon-alibi are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/fxmarty-tiny-testing-falcon-alibi-bnb-4bit-smashed",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-testing-falcon-alibi")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
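The repo above already ships pre-quantized weights, so the plain `from_pretrained` call is enough. If you instead want to reproduce a comparable 4-bit load from the original base model yourself, here is a sketch with an explicit `BitsAndBytesConfig` (all settings are illustrative, not Pruna's exact configuration):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights via bitsandbytes
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype is an assumption
)
model = AutoModelForCausalLM.from_pretrained(
    "fxmarty/tiny-testing-falcon-alibi",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```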
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model fxmarty/tiny-testing-falcon-alibi, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Smaraa/gpt2-text-simplification_1e4_adafactor | Smaraa | "2023-06-24T20:19:08Z" | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-17T20:13:07Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8713
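No usage example is provided; below is a minimal generation sketch (the prompt format expected by this fine-tune is undocumented, so plain-text input is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Smaraa/gpt2-text-simplification_1e4_adafactor"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The committee reached a unanimous decision.", return_tensors="pt")
# GPT-2 has no pad token, so reuse EOS to silence the generation warning.
outputs = model.generate(**inputs, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```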
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8976 | 1.0 | 2326 | 0.8092 |
| 0.8236 | 2.0 | 4652 | 0.7974 |
| 0.7807 | 3.0 | 6978 | 0.7940 |
| 0.7428 | 4.0 | 9304 | 0.7916 |
| 0.7094 | 5.0 | 11630 | 0.7955 |
| 0.6828 | 6.0 | 13956 | 0.7996 |
| 0.6569 | 7.0 | 16282 | 0.8046 |
| 0.6365 | 8.0 | 18608 | 0.8103 |
| 0.616 | 9.0 | 20934 | 0.8175 |
| 0.6001 | 10.0 | 23260 | 0.8212 |
| 0.5819 | 11.0 | 25586 | 0.8291 |
| 0.5688 | 12.0 | 27912 | 0.8368 |
| 0.5562 | 13.0 | 30238 | 0.8378 |
| 0.5437 | 14.0 | 32564 | 0.8462 |
| 0.5299 | 15.0 | 34890 | 0.8497 |
| 0.5242 | 16.0 | 37216 | 0.8568 |
| 0.5158 | 17.0 | 39542 | 0.8643 |
| 0.5088 | 18.0 | 41868 | 0.8676 |
| 0.5027 | 19.0 | 44194 | 0.8703 |
| 0.4989 | 20.0 | 46520 | 0.8713 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sam-mosaic/pile-of-law-tokenizer | sam-mosaic | "2022-12-22T01:04:28Z" | 0 | 1 | null | [
"region:us"
] | null | "2022-12-21T22:05:31Z" | # Pile of Law Tokenizer
This tokenizer should be a drop-in replacement for the GPT2Tokenizer. It has the same special tokens, but was trained on a random 1M samples from [the pile of law](https://huggingface.co/datasets/pile-of-law/pile-of-law) train split.
It has exactly 52,000 tokens, which differs from GPT-2's 50,257.
Usage:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")
```
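A quick sanity check of the drop-in claim, comparing vocabulary sizes and tokenizations against stock GPT-2 (the example sentence is arbitrary):
```python
from transformers import AutoTokenizer

law_tok = AutoTokenizer.from_pretrained("sam-mosaic/pile-of-law-tokenizer")
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")

print(len(law_tok), len(gpt2_tok))  # 52000 vs. 50257
text = "The court granted the motion in limine."
print(law_tok.tokenize(text))   # legal terms should split into fewer pieces
print(gpt2_tok.tokenize(text))
```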
|
PepperPotions/awddawdasdaw | PepperPotions | "2023-01-07T07:23:58Z" | 0 | 0 | null | [
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-01-07T07:23:32Z" | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Any(thing) Mix(es)
Mixed weeb models :)
# Models
## anything-berry-30.ckpt
[Re-uploaded from](https://huggingface.co/misobarisic/anything-berrymix)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.30 | Anything V3 | Berry Mix | n/a | **anything-berry-30**
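"Weighted Sum" and "Add Difference" (used throughout the recipes below) are simple elementwise operations over the checkpoints' state dicts; here is a minimal sketch, assuming plain `torch` state dicts with matching keys already loaded into memory:
```python
import torch

def weighted_sum(a, b, alpha):
    # theta = (1 - alpha) * A + alpha * B
    return {k: torch.lerp(a[k], b[k], alpha) for k in a}

def add_difference(a, b, c, multiplier=1.0):
    # theta = A + multiplier * (B - C)
    return {k: a[k] + multiplier * (b[k] - c[k]) for k in a}
```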
## anything-f222-15.ckpt
[Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zdbs3r/comment/iz0nr48/?utm_source=reddit&utm_medium=web2x&context=3)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.15 | Anything V3 | Zeipher F222 | n/a | **anything-f222-15**
## anything-f222-15-elysiumv2-10.ckpt
[Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zg1d8x/comment/izei93c/?utm_source=reddit&utm_medium=web2x&context=3)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.10 | anything-f222-15 | Elysium Anime v2 | n/a | **anything-f222-15-elysiumv2-10**
## berrymix-v3.ckpt
[Recipe Source](https://rentry.org/hdgrecipes#berrymix-v3-535d98a3)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.05 | AnythingV3.0 | Stable Diffusion 1.5 | n/a | Anything Fix
2 | Add Difference @ 1 | Anything fix | Zeipher F222 | Stable Diffusion 1.5 | berrymix3 lite
3 | Weighted Sum @ 0.25 | berrymix3 lite | r34_e4 | n/a | **berrymix V3**
## blossom-extract.safetensors
[Recipe Source](https://www.reddit.com/r/StableDiffusion/comments/zk8y50/comment/izyhn8w/?utm_source=reddit&utm_medium=web2x&context=3)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Add Difference @ 1 | Anything V3 | Zeipher F222 | Stable Diffusion 1.4 | **blossom-extract**
## hentai-elysium-50.safetensors
[Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zn6wdb/comment/j0fabe6/?utm_source=reddit&utm_medium=web2x&context=3)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.5 | Hentai Diffusion 17 | Elysium Anime v2 | n/a | **hentai-elysium-50**
## nutmeg-mix.ckpt
[Recipe Source](https://rentry.org/hdgrecipes#nutmegmix-aa3e502b)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.05 | NovelAI | Stable Diffusion 1.5 | n/a | nutmegmix-part1
2 | Weighted Sum @ 0.05 | nutmegmix-part1 | Zeipher F222 | n/a | nutmegmix-part2
3 | Weighted Sum @ 0.05 | nutmegmix-part2 | r34_e4 | n/a | nutmegmix-part3
4 | Weighted Sum @ 0.05 | nutmegmix-part3 | SmirkingFace | n/a | nutmegmix-part4
5 | Weighted Sum @ 0.3 | AnythingV3.0 | nutmegmix-part4 | n/a | **nutmeg-mix**
## raspberry-mix.ckpt
[Recipe Source](https://rentry.org/hdgrecipes#raspberry-mix-4d202242)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.25 | AnythingV3.0 | Stable Diffusion 1.5 | n/a | AnyV3-SD1.5
2 | Add Difference @ 1 | AnyV3-SD1.5 | Zeipher F222 | Stable Diffusion 1.4 | raspberry-lite
3 | Weighted Sum @ 0.15 | raspberry-lite | r34_e4 | n/a | **raspberry mix**
## strawberry-mix.ckpt
[Recipe Source](https://rentry.org/hdgrecipes#strawberry-mix-e043dfc5)
Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.25 | AnythingV3.0 | Stable Diffusion 1.4 | n/a | AnyV3-SD1.4
2 | Add Difference @ 1 | AnyV3-SD1.4 | Zeipher F111 | Stable Diffusion 1.4 | strawberry-lite
3 | Weighted Sum @ 0.15 | strawberry-lite | r34_e4 | n/a | **strawberry mix** |
ilhami/Tr_En_AcademicTranslation | ilhami | "2022-06-12T19:05:53Z" | 26 | 2 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"tr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-06-12T09:19:05Z" | ---
language:
- tr
- en
tags:
- translation
license: apache-2.0
datasets:
- Parallel Corpora for Turkish-English Academic Translations
metrics:
- bleu
- sacrebleu
---
## Model Details
- **Developed by:** İlhami SEL
- **Model type:** Turkish-English Machine Translation -- Transformer Based(6 Layer)
- **Language:** Turkish - English
- **Resources for more information:** Sel, İ., Üzen, H. & Hanbay, D. (2021). Creating a Parallel Corpora for Turkish-English Academic Translations. Computer Science, 5th International Artificial Intelligence and Data Processing Symposium, 335-340. DOI: 10.53070/bbd.990959
```python
checkpoint = "ilhami/Tr_En_AcademicTranslation"
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to("cuda")
tr= ["Sohbet robotları son yıllarda yaygın bir şekilde kullanılmaya başlanmıştır. ",
"İnsanları taklit eden ve daha iyi müşteri memnuniyeti sağlayan sohbet robotları en gelişkin doğal dil işleme tekniklerine ihtiyaç duymaktadır. ",
"Bu çalışma sohbet robotu konuşmalarının niyet tahminini geliştirmeye odaklanmıştır." ,
"Kelime gösterimi için TF-IDF, Doc2vec ve BERT gibi geleneksel ve gelişmiş doğal dil işleme yöntemleri, çoklu sınıf ve çoklu etiket tahmini için ise lojistik regresyon, rastgele orman ve yapay sinir ağları kullanılmıştır." ,
"Sohbet robotu konuşma veri kümeleri, sinema bileti rezervasyonu, restoran rezervasyonu ve taksi çağırma olmak üzere üç farklı alandan alınmıştır. ",
"Bu çalışmanın sonunda, BERT ve BERT ile TF-IDF birleşimi modellerin diğer kombinasyonlardan daha iyi sonuç verdiği görülmüştür. ",
"BERT gibi ön eğitimli modellerden faydalanmanın daha iyi bağlamsal anlama sağladığı ortaya çıkmıştır. ",
"TF-IDF yerleştirmeleri, BERT gösterimi ile birleştirilerek niyet kategorisi tahmininin iyileştirilmesi amaçlanmıştır."]
encoded_text = tokenizer(tr, return_tensors="pt", padding = True).to("cuda")
generated_tokens = model.generate(**encoded_text)
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
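`generate` above uses greedy defaults; beam search typically improves MT output. A hedged variant (the beam count and length cap are arbitrary choices):
```python
generated_tokens = model.generate(**encoded_text, num_beams=5, max_length=256)
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print("\n".join(en))
```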
|
sosoai/hansoldeco-gemma-2-2b-v0.1-mlx | sosoai | "2024-08-07T01:08:42Z" | 7 | 0 | mlx | [
"mlx",
"safetensors",
"gemma2",
"en",
"ko",
"license:gemma",
"region:us"
] | null | "2024-08-07T00:44:03Z" | ---
language:
- en
- ko
license: gemma
tags:
- mlx
---
# sosoai/hansoldeco-gemma-2-2b-v0.1-mlx
The Model [sosoai/hansoldeco-gemma-2-2b-v0.1-mlx](https://huggingface.co/sosoai/hansoldeco-gemma-2-2b-v0.1-mlx) was converted to MLX format from [sosoai/hansoldeco-gemma-2-2b-v0.1](https://huggingface.co/sosoai/hansoldeco-gemma-2-2b-v0.1) using mlx-lm version **0.16.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("sosoai/hansoldeco-gemma-2-2b-v0.1-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
davidschulte/ESM_KBLab__overlim_sst_da | davidschulte | "2025-03-28T12:49:11Z" | 26 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:KBLab/overlim",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-05T17:11:09Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e5_s55555_v4_l4_v100 | KingKazma | "2023-08-13T14:12:31Z" | 3 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T14:12:30Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
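No usage instructions are given; based on the repository name this looks like a p-tuning adapter for `t5-small` trained on XSum, so here is a loading sketch under that assumption:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e5_s55555_v4_l4_v100"
config = PeftConfig.from_pretrained(repo)

# Load the base model recorded in the adapter config, then attach the adapter.
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```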
|
Sloba/q-FrozenLake-v1-4x4-noSlippery | Sloba | "2023-12-27T13:27:28Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-27T13:27:25Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is the helper defined in the course notebook; `model` is a dict
# holding the Q-table and environment metadata.
model = load_from_hub(repo_id="Sloba/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mlx-community/stockmark-100b-8bit | mlx-community | "2024-05-16T05:01:17Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"ja",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-16T03:01:33Z" | ---
language:
- ja
- en
license: mit
library_name: transformers
tags:
- mlx
---
# mlx-community/stockmark-100b-8bit
This model was converted to MLX format from [`stockmark/stockmark-100b`](https://huggingface.co/stockmark/stockmark-100b) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/stockmark/stockmark-100b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/stockmark-100b-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
DevQuasar/lightblue.lb-reranker-0.5B-v1.0-GGUF | DevQuasar | "2025-02-01T23:16:59Z" | 197 | 0 | null | [
"gguf",
"text-generation",
"base_model:lightblue/lb-reranker-0.5B-v1.0",
"base_model:quantized:lightblue/lb-reranker-0.5B-v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-17T18:11:18Z" | ---
base_model:
- lightblue/lb-reranker-0.5B-v1.0
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [lightblue/lb-reranker-0.5B-v1.0](https://huggingface.co/lightblue/lb-reranker-0.5B-v1.0)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
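A minimal local-inference sketch with `llama-cpp-python`; the exact GGUF filename inside this repo is an assumption, so check the repo's file list first:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is hypothetical -- pick an actual quant from the repo's file list.
path = hf_hub_download(
    repo_id="DevQuasar/lightblue.lb-reranker-0.5B-v1.0-GGUF",
    filename="lightblue.lb-reranker-0.5B-v1.0.Q4_K_M.gguf",
)
llm = Llama(model_path=path)

# The reranker's expected prompt format is documented in the base model card
# (lightblue/lb-reranker-0.5B-v1.0); a plain completion call looks like:
out = llm("<your reranker prompt here>", max_tokens=8)
print(out["choices"][0]["text"])
```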
|
dhrf/ppo-LunarLander-v2 | dhrf | "2023-03-19T12:26:28Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-19T12:26:06Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.56 +/- 21.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not documented.
checkpoint = load_from_hub(repo_id="dhrf/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
akmmsr/bert-finetuned-ner | akmmsr | "2022-11-23T12:31:34Z" | 69 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-11-18T12:54:34Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: akmmsr/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# akmmsr/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0266
- Validation Loss: 0.0519
- Epoch: 2
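No inference snippet is included; here is a minimal sketch with the `transformers` pipeline (the example sentence is arbitrary, and the entity labels depend on the fine-tuning config):
```python
from transformers import pipeline

# The repo ships TensorFlow weights, hence framework="tf".
ner = pipeline(
    "token-classification",
    model="akmmsr/bert-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```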
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1758 | 0.0625 | 0 |
| 0.0457 | 0.0537 | 1 |
| 0.0266 | 0.0519 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ChetanSharma9704/deepseek-7b-finetuned | ChetanSharma9704 | "2025-03-28T13:06:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-27T13:54:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
butlermasango01/llama2-finetuned-merged-SCHEMA_v1 | butlermasango01 | "2025-02-22T18:40:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-22T18:39:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
brunoboat/Pixelcopter-PLE-v1 | brunoboat | "2023-08-06T22:22:51Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-06T22:22:49Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.50 +/- 10.47
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AscheZ/ALIE_0.5 | AscheZ | "2024-02-09T11:16:18Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:bofenghuang/vigostral-7b-chat",
"base_model:adapter:bofenghuang/vigostral-7b-chat",
"region:us"
] | null | "2024-02-08T10:22:07Z" | ---
library_name: peft
base_model: bofenghuang/vigostral-7b-chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
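The tags identify this repository as a PEFT adapter on top of `bofenghuang/vigostral-7b-chat`, so here is a loading sketch under that assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base chat model, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained("bofenghuang/vigostral-7b-chat")
model = PeftModel.from_pretrained(base, "AscheZ/ALIE_0.5")
tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat")
```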
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
UchihaMadara/model1-thesis-3 | UchihaMadara | "2023-02-10T14:10:32Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-02-10T12:49:48Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: model1-thesis-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model1-thesis-3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1377
- Precision: 0.4527
- Recall: 0.5051
- F1: 0.4774
- Accuracy: 0.6190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 45 | 1.3105 | 0.3737 | 0.4765 | 0.4189 | 0.5364 |
| No log | 2.0 | 90 | 1.0783 | 0.4009 | 0.4523 | 0.4250 | 0.5781 |
| No log | 3.0 | 135 | 1.0601 | 0.4444 | 0.4750 | 0.4592 | 0.6127 |
| No log | 4.0 | 180 | 1.0953 | 0.4745 | 0.4876 | 0.4809 | 0.6266 |
| No log | 5.0 | 225 | 1.1377 | 0.4527 | 0.5051 | 0.4774 | 0.6190 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vumichien/whisper-large-v2-jp | vumichien | "2023-09-12T09:11:12Z" | 27 | 11 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-08T22:52:31Z" | ---
language:
- ja
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
base_model: openai/whisper-large-v2
model-index:
- name: Whisper Large V2 Japanese
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ja
type: mozilla-foundation/common_voice_11_0
config: ja
split: test
args: ja
metrics:
- type: wer
value: 8.1166
name: Wer
- type: cer
value: 5.0032
name: Cer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2352
- Wer: 8.1166
- Cer: 5.0032
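No inference snippet is included; here is a minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vumichien/whisper-large-v2-jp")
result = asr(
    "sample_ja.wav",  # placeholder path to a Japanese audio file
    generate_kwargs={"language": "japanese", "task": "transcribe"},
)
print(result["text"])
```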
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|
| 0.0897 | 0.1 | 1000 | 0.1884 | 11.0068 | 6.6992 |
| 0.0396 | 0.2 | 2000 | 0.1749 | 9.7399 | 5.9350 |
| 0.036 | 1.1 | 3000 | 0.1698 | 9.1419 | 5.6781 |
| 0.012 | 1.2 | 4000 | 0.1849 | 9.3041 | 5.7661 |
| 0.0151 | 2.09 | 5000 | 0.1879 | 9.1959 | 5.6761 |
| 0.0047 | 2.19 | 6000 | 0.2097 | 8.6706 | 5.4422 |
| 0.0046 | 3.09 | 7000 | 0.2040 | 8.8277 | 5.4717 |
| 0.0015 | 3.19 | 8000 | 0.2260 | 8.4949 | 5.3101 |
| 0.0013 | 4.09 | 9000 | 0.2339 | 8.3716 | 5.1471 |
| 0.0005 | 4.19 | 10000 | 0.2352 | 8.1166 | 5.0032 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
lalagi2/gemma_no_quant | lalagi2 | "2025-01-22T13:26:21Z" | 25 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T09:33:32Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cleanrl/Amidar-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | "2023-03-07T16:52:34Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Amidar-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-17T15:02:26Z" | ---
tags:
- Amidar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Amidar-v5
type: Amidar-v5
metrics:
- type: mean_reward
value: 1273.30 +/- 288.11
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Amidar-v5**
This is a trained model of a PPO agent playing Amidar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```bash
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Amidar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Amidar-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Amidar-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Amidar-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Amidar-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Amidar-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
niklasm222/Qwen2.5-3B-Instruct-1K_subset-GRPO-gsm8k-prolog-prover-v1 | niklasm222 | "2025-03-12T05:47:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T05:45:54Z" | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
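A minimal chat-style inference sketch with `transformers` (the prompt is arbitrary; the GRPO fine-tune's exact expected format is not documented here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "niklasm222/Qwen2.5-3B-Instruct-1K_subset-GRPO-gsm8k-prolog-prover-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "If 3 pens cost 12 dollars, how much do 7 pens cost?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```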
|
rjgpinel/3dlotus | rjgpinel | "2025-02-10T17:07:01Z" | 0 | 0 | null | [
"robotics",
"dataset:rjgpinel/GEMBench",
"dataset:rjgpinel/RLBench-18Task",
"arxiv:2410.01345",
"license:mit",
"region:us"
] | robotics | "2025-02-10T16:31:39Z" | ---
license: mit
datasets:
- rjgpinel/GEMBench
- rjgpinel/RLBench-18Task
pipeline_tag: robotics
---
## Official 3D-LOTUS/3D-LOTUS++ Code Repository
You can find our code and the corresponding instructions to run our model at: https://github.com/vlc-robot/robot-3dlotus.
## Citation
If you use our 3D-LOTUS/3D-LOTUS++ model, please kindly cite our [work](https://arxiv.org/abs/2410.01345):
**BibTeX:**
```bibtex
@inproceedings{garcia25gembench,
author = {Ricardo Garcia and Shizhe Chen and Cordelia Schmid},
title = {Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2025}
}
``` |
rod16/v1_finetuning-sentiment-model-news-samples | rod16 | "2023-07-21T01:30:45Z" | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-20T21:16:00Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: v1_finetuning-sentiment-model-news-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1_finetuning-sentiment-model-news-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4173
- Accuracy: 0.8567
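No usage example is given; here is a minimal sketch (the label names come from the fine-tuning config, so inspect the output rather than assuming them):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="rod16/v1_finetuning-sentiment-model-news-samples",
)
print(clf("Shares rallied after the company beat earnings expectations."))
```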
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chiawei0411/blip2-opt-2.7b-646-220k-captions-flickr | chiawei0411 | "2024-04-21T08:46:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-21T08:46:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sam-2577/zephyr-support-chatbot | sam-2577 | "2024-05-17T14:53:08Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | "2024-05-17T14:17:42Z" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
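Since this repository stores a PEFT adapter trained on top of a GPTQ-quantized base model, one plausible way to run it is through `AutoPeftModelForCausalLM` — a minimal sketch, not an official snippet, assuming `auto-gptq`/`optimum` are installed and using Zephyr-style chat markup:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter together with the GPTQ base model recorded in its adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "sam-2577/zephyr-support-chatbot",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-alpha-GPTQ")

prompt = "<|user|>\nHow do I reset my password?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```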
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
StoriesLM/StoriesLM-v1-1953 | StoriesLM | "2024-03-09T23:10:06Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:dell-research-harvard/AmericanStories",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-03-09T22:51:33Z" | ---
license: mit
datasets:
- dell-research-harvard/AmericanStories
language:
- en
---
# StoriesLM: A Family of Language Models With Sequentially-Expanding Pretraining Windows
## Model Family
StoriesLM is a family of language models with sequentially-expanding pretraining windows. The pretraining data for the model family comes from the American Stories dataset—a collection of language from historical American news articles. The first language model in the StoriesLM family is pretrained on language data from 1900. Each subsequent language model further trains the previous year’s model checkpoint using data from the following year, up until 1963.
## Dataset
The StoriesLM family is pretrained on the American Stories dataset. If you use a model from this family, please also cite the original dataset's authors:
```
@article{dell2024american,
title={American stories: A large-scale structured text dataset of historical us newspapers},
author={Dell, Melissa and Carlson, Jacob and Bryan, Tom and Silcock, Emily and Arora, Abhishek and Shen, Zejiang and D'Amico-Wong, Luca and Le, Quan and Querubin, Pablo and Heldring, Leander},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
```
|
phoner45/package-bhh-model-fine-tune | phoner45 | "2024-11-08T04:02:41Z" | 131 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T02:42:10Z" | ---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: package-bhh-model-fine-tune
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- th
pipeline_tag: text-generation
---
# Model Card for package-bhh-model-fine-tune
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="phoner45/package-bhh-model-fine-tune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/natthaphat-le-walailak-university/huggingface/runs/as2eeaqc)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.5.0+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mii-community/zefiro-7b-sft-ITA | mii-community | "2024-02-21T09:03:14Z" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"it",
"dataset:mii-community/ultrafeedback-translated-ita",
"arxiv:2310.16944",
"arxiv:2312.09993",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-20T11:50:09Z" | ---
license: apache-2.0
datasets:
- mii-community/ultrafeedback-translated-ita
language:
- it
pipeline_tag: text-generation
---
<img src="https://hoodie-creator.s3.eu-west-1.amazonaws.com/15be78c6-original.png" alt="zefiro" border="0" width="400px">
# Model Card for zefiro-7b-sft-ITA
*Last Update: 20/02/2024*<br>
<!-- Provide a quick summary of what the model is/does. -->
Zefiro is an SFT fine-tuned model for the Italian language based on [Zefiro-base-7b-ita](https://huggingface.co/mii-community/zefiro-7b-base-ITA).
The aim of the project is to create a set of open-source models and datasets suited to the Italian language, and this is the first experiment.
The model can be used as a base model for more specific conversational tasks in Italian.
## Model Details
Zefiro is a port of the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model to the Italian language, built using the recipes
from the [alignment-handbook](https://huggingface.co/alignment-handbook). It also takes inspiration and insights from the [Llamantino](https://huggingface.co/swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA) model
developed by the Università di Bari. For the implementation we combined approaches from the two models mentioned above, as well as from the wider open-source community.
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Italian
- **License:** Apache 2
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Developed by:** [giux78](https://alessandroercolani.webflow.io/)
- **Funded by:** [Business Operating System](https://www.businessos.xyz)
## Code
I followed the [alignment handbook](https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_qlora.yaml) from the HuggingFaceH4 team;
I just changed the base model and some parameters.
## Computation
It was trained on two A100 GPUs from [seeweb.it](https://www.seeweb.it/), who sponsored the training. I strongly
recommend them as one of the cheapest and most solid GPU providers.
## Evaluations:
| Model | Arc-c | HellaS | MMLU | AVG |
| --- | --- | --- | --- | --- |
| Mixtral 7x8 | 52.8 | 75.1 | 70.9 | 66.26666667 |
| LLama2 70b | 49.4 | 70.9 | 65.1 | 61.8 |
| zefiro-dpo-7b | 52.69 | 67.09 | 50.8 | 56.86 |
| zefiro-base-7b | 51.07 | 63.47 | 52.97 | 55.83666667 |
| **zefiro-sft-7b** | **50.98** | **62.71** | **51.96** | **55.21666667** |
| LLama1 34B | 42.9 | 65.4 | 49.0 | 52.43333333 |
## Intended uses & limitations
The model was initially fine-tuned on [UltraChat-ITA](https://huggingface.co/datasets/mii-community/ultrafeedback-translated-ita), a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
Here's how you can run the model using Transformers from 🤗 :
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mii-community/zefiro-7b-sft-ITA"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
sys_prompt = "Sei un assistente disponibile, rispettoso e onesto. " \
"Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
"Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
"Assicurati che le tue risposte siano socialmente imparziali e positive. " \
"Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
"Se non conosci la risposta a una domanda, non condividere informazioni false."
messages = [{ 'content' : sys_prompt, 'role' : 'assistant'},
{'content' : 'Crea una lista su cosa mangiare a pranzo ogni giorno della settimana a pranzo e cena', 'role' : 'user'}]
def generate_text(sys_prompt, user_prompt):
messages = [{ 'content' : sys_prompt, 'role' : 'assistant'},
{'content' : user_prompt, 'role' : 'user'}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
generate_text(sys_prompt, 'cosa ne pensi della politica italiana?')
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zefiro-7b-sft-ITA has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We used [UltraChat-ITA](https://huggingface.co/datasets/mii-community/ultrafeedback-translated-ita), a filtered version of [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat), as training data.
For translating the dataset we combined different tools and APIs; we are also evaluating the best approach for translating many more datasets.
We have seen that the translation phase is critical and can introduce incorrect syntax and semantics.
#### Summary
Zefiro-7b-beta-ITA-v0.1 is a fine-tuned version of Mistral-7B using the Zephyr approach for the Italian language.
## Citation
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{basile2023llamantino,
title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
year={2023},
eprint={2312.09993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Authors
[giux78](https://huggingface.co/giux78)
## Model Card Contact
**[email protected] |
MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k13_task2_organization | MayBashendy | "2024-12-23T14:10:35Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-23T13:44:26Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k13_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k13_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8324
- Qwk: 0.5543
- Mse: 0.8324
- Rmse: 0.9124
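For reference, Qwk here denotes quadratic weighted kappa and Rmse is the square root of the Mse; an illustrative sketch of how such metrics can be computed (not the project's actual evaluation script, and the labels below are hypothetical):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold labels and predictions on an ordinal organization score.
y_true = np.array([0, 1, 2, 2, 3, 1])
y_pred = np.array([0, 1, 2, 3, 3, 2])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```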
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0274 | 2 | 4.2421 | -0.0152 | 4.2421 | 2.0596 |
| No log | 0.0548 | 4 | 2.1092 | 0.0512 | 2.1092 | 1.4523 |
| No log | 0.0822 | 6 | 1.1188 | 0.0569 | 1.1188 | 1.0577 |
| No log | 0.1096 | 8 | 0.9041 | -0.0455 | 0.9041 | 0.9508 |
| No log | 0.1370 | 10 | 0.7720 | 0.1770 | 0.7720 | 0.8786 |
| No log | 0.1644 | 12 | 0.7869 | 0.1146 | 0.7869 | 0.8871 |
| No log | 0.1918 | 14 | 0.7314 | 0.1895 | 0.7314 | 0.8552 |
| No log | 0.2192 | 16 | 0.7409 | 0.1404 | 0.7409 | 0.8607 |
| No log | 0.2466 | 18 | 0.7329 | 0.1442 | 0.7329 | 0.8561 |
| No log | 0.2740 | 20 | 0.6782 | 0.2014 | 0.6782 | 0.8236 |
| No log | 0.3014 | 22 | 0.6884 | 0.2080 | 0.6884 | 0.8297 |
| No log | 0.3288 | 24 | 0.6899 | 0.2207 | 0.6899 | 0.8306 |
| No log | 0.3562 | 26 | 0.6775 | 0.2418 | 0.6775 | 0.8231 |
| No log | 0.3836 | 28 | 0.6507 | 0.2587 | 0.6507 | 0.8066 |
| No log | 0.4110 | 30 | 0.6960 | 0.2061 | 0.6960 | 0.8343 |
| No log | 0.4384 | 32 | 0.7944 | 0.2807 | 0.7944 | 0.8913 |
| No log | 0.4658 | 34 | 0.7414 | 0.3111 | 0.7414 | 0.8611 |
| No log | 0.4932 | 36 | 0.6380 | 0.3760 | 0.6380 | 0.7987 |
| No log | 0.5205 | 38 | 0.5923 | 0.3360 | 0.5923 | 0.7696 |
| No log | 0.5479 | 40 | 0.6151 | 0.2708 | 0.6151 | 0.7843 |
| No log | 0.5753 | 42 | 0.7606 | 0.2771 | 0.7606 | 0.8721 |
| No log | 0.6027 | 44 | 0.6306 | 0.3448 | 0.6306 | 0.7941 |
| No log | 0.6301 | 46 | 0.5718 | 0.3755 | 0.5718 | 0.7561 |
| No log | 0.6575 | 48 | 0.6216 | 0.4913 | 0.6216 | 0.7884 |
| No log | 0.6849 | 50 | 0.5880 | 0.4949 | 0.5880 | 0.7668 |
| No log | 0.7123 | 52 | 0.6483 | 0.4842 | 0.6483 | 0.8052 |
| No log | 0.7397 | 54 | 0.9833 | 0.3097 | 0.9833 | 0.9916 |
| No log | 0.7671 | 56 | 0.8750 | 0.3091 | 0.8750 | 0.9354 |
| No log | 0.7945 | 58 | 0.6754 | 0.4132 | 0.6754 | 0.8218 |
| No log | 0.8219 | 60 | 0.5803 | 0.5150 | 0.5803 | 0.7618 |
| No log | 0.8493 | 62 | 0.6265 | 0.5245 | 0.6265 | 0.7915 |
| No log | 0.8767 | 64 | 0.5691 | 0.4731 | 0.5691 | 0.7544 |
| No log | 0.9041 | 66 | 0.5722 | 0.4918 | 0.5722 | 0.7564 |
| No log | 0.9315 | 68 | 0.6054 | 0.4823 | 0.6054 | 0.7781 |
| No log | 0.9589 | 70 | 0.6513 | 0.5078 | 0.6513 | 0.8071 |
| No log | 0.9863 | 72 | 0.6950 | 0.5129 | 0.6950 | 0.8337 |
| No log | 1.0137 | 74 | 0.7957 | 0.5172 | 0.7957 | 0.8920 |
| No log | 1.0411 | 76 | 0.7712 | 0.5154 | 0.7712 | 0.8782 |
| No log | 1.0685 | 78 | 0.7345 | 0.4963 | 0.7345 | 0.8571 |
| No log | 1.0959 | 80 | 0.7177 | 0.5088 | 0.7177 | 0.8472 |
| No log | 1.1233 | 82 | 0.6920 | 0.5257 | 0.6920 | 0.8319 |
| No log | 1.1507 | 84 | 0.8172 | 0.5316 | 0.8172 | 0.9040 |
| No log | 1.1781 | 86 | 0.8462 | 0.4759 | 0.8462 | 0.9199 |
| No log | 1.2055 | 88 | 0.6148 | 0.5198 | 0.6148 | 0.7841 |
| No log | 1.2329 | 90 | 0.7003 | 0.3880 | 0.7003 | 0.8369 |
| No log | 1.2603 | 92 | 0.6724 | 0.3847 | 0.6724 | 0.8200 |
| No log | 1.2877 | 94 | 0.5739 | 0.5004 | 0.5739 | 0.7576 |
| No log | 1.3151 | 96 | 1.0146 | 0.4167 | 1.0146 | 1.0073 |
| No log | 1.3425 | 98 | 1.3379 | 0.3656 | 1.3379 | 1.1567 |
| No log | 1.3699 | 100 | 1.0989 | 0.4541 | 1.0989 | 1.0483 |
| No log | 1.3973 | 102 | 0.6980 | 0.5364 | 0.6980 | 0.8355 |
| No log | 1.4247 | 104 | 0.7076 | 0.4850 | 0.7076 | 0.8412 |
| No log | 1.4521 | 106 | 0.6797 | 0.5102 | 0.6797 | 0.8245 |
| No log | 1.4795 | 108 | 0.9440 | 0.4863 | 0.9440 | 0.9716 |
| No log | 1.5068 | 110 | 1.2948 | 0.4163 | 1.2948 | 1.1379 |
| No log | 1.5342 | 112 | 1.2260 | 0.4007 | 1.2260 | 1.1072 |
| No log | 1.5616 | 114 | 0.8928 | 0.4578 | 0.8928 | 0.9449 |
| No log | 1.5890 | 116 | 0.7653 | 0.5214 | 0.7653 | 0.8748 |
| No log | 1.6164 | 118 | 0.7820 | 0.5289 | 0.7820 | 0.8843 |
| No log | 1.6438 | 120 | 0.7886 | 0.5214 | 0.7886 | 0.8880 |
| No log | 1.6712 | 122 | 0.8654 | 0.5228 | 0.8654 | 0.9302 |
| No log | 1.6986 | 124 | 0.9674 | 0.5082 | 0.9674 | 0.9836 |
| No log | 1.7260 | 126 | 1.2223 | 0.4765 | 1.2223 | 1.1056 |
| No log | 1.7534 | 128 | 1.2753 | 0.4711 | 1.2753 | 1.1293 |
| No log | 1.7808 | 130 | 0.9563 | 0.4937 | 0.9563 | 0.9779 |
| No log | 1.8082 | 132 | 0.7437 | 0.5378 | 0.7437 | 0.8624 |
| No log | 1.8356 | 134 | 0.8834 | 0.4995 | 0.8834 | 0.9399 |
| No log | 1.8630 | 136 | 0.8918 | 0.4995 | 0.8918 | 0.9443 |
| No log | 1.8904 | 138 | 0.7579 | 0.5053 | 0.7579 | 0.8706 |
| No log | 1.9178 | 140 | 1.0206 | 0.5359 | 1.0206 | 1.0102 |
| No log | 1.9452 | 142 | 1.2826 | 0.4555 | 1.2826 | 1.1325 |
| No log | 1.9726 | 144 | 1.0771 | 0.4984 | 1.0771 | 1.0378 |
| No log | 2.0 | 146 | 0.8257 | 0.5284 | 0.8257 | 0.9087 |
| No log | 2.0274 | 148 | 0.7744 | 0.5294 | 0.7744 | 0.8800 |
| No log | 2.0548 | 150 | 0.7676 | 0.5359 | 0.7676 | 0.8761 |
| No log | 2.0822 | 152 | 0.7872 | 0.5910 | 0.7872 | 0.8873 |
| No log | 2.1096 | 154 | 0.8785 | 0.54 | 0.8785 | 0.9373 |
| No log | 2.1370 | 156 | 0.9011 | 0.5023 | 0.9011 | 0.9492 |
| No log | 2.1644 | 158 | 0.8695 | 0.5569 | 0.8695 | 0.9324 |
| No log | 2.1918 | 160 | 0.8799 | 0.5012 | 0.8799 | 0.9380 |
| No log | 2.2192 | 162 | 0.9597 | 0.4856 | 0.9597 | 0.9797 |
| No log | 2.2466 | 164 | 0.9816 | 0.5195 | 0.9816 | 0.9908 |
| No log | 2.2740 | 166 | 1.2485 | 0.4568 | 1.2485 | 1.1174 |
| No log | 2.3014 | 168 | 1.7938 | 0.3724 | 1.7938 | 1.3393 |
| No log | 2.3288 | 170 | 1.7627 | 0.3495 | 1.7627 | 1.3277 |
| No log | 2.3562 | 172 | 1.2953 | 0.3804 | 1.2953 | 1.1381 |
| No log | 2.3836 | 174 | 0.7714 | 0.5515 | 0.7714 | 0.8783 |
| No log | 2.4110 | 176 | 0.6365 | 0.5608 | 0.6365 | 0.7978 |
| No log | 2.4384 | 178 | 0.6294 | 0.5577 | 0.6294 | 0.7933 |
| No log | 2.4658 | 180 | 0.7167 | 0.5572 | 0.7167 | 0.8466 |
| No log | 2.4932 | 182 | 0.8662 | 0.5582 | 0.8662 | 0.9307 |
| No log | 2.5205 | 184 | 1.0356 | 0.4930 | 1.0356 | 1.0176 |
| No log | 2.5479 | 186 | 1.1789 | 0.4827 | 1.1789 | 1.0858 |
| No log | 2.5753 | 188 | 1.3201 | 0.4597 | 1.3201 | 1.1490 |
| No log | 2.6027 | 190 | 1.1444 | 0.4849 | 1.1444 | 1.0698 |
| No log | 2.6301 | 192 | 1.0868 | 0.5009 | 1.0868 | 1.0425 |
| No log | 2.6575 | 194 | 1.1584 | 0.4748 | 1.1584 | 1.0763 |
| No log | 2.6849 | 196 | 1.2044 | 0.4744 | 1.2044 | 1.0974 |
| No log | 2.7123 | 198 | 1.1532 | 0.5002 | 1.1532 | 1.0739 |
| No log | 2.7397 | 200 | 1.2993 | 0.4315 | 1.2993 | 1.1399 |
| No log | 2.7671 | 202 | 1.1806 | 0.4691 | 1.1806 | 1.0866 |
| No log | 2.7945 | 204 | 1.0857 | 0.4858 | 1.0857 | 1.0420 |
| No log | 2.8219 | 206 | 0.9406 | 0.5031 | 0.9406 | 0.9698 |
| No log | 2.8493 | 208 | 1.0283 | 0.4826 | 1.0283 | 1.0141 |
| No log | 2.8767 | 210 | 1.4159 | 0.3590 | 1.4159 | 1.1899 |
| No log | 2.9041 | 212 | 1.6849 | 0.3455 | 1.6849 | 1.2980 |
| No log | 2.9315 | 214 | 1.5028 | 0.3600 | 1.5028 | 1.2259 |
| No log | 2.9589 | 216 | 1.1936 | 0.3942 | 1.1936 | 1.0925 |
| No log | 2.9863 | 218 | 0.9323 | 0.5336 | 0.9323 | 0.9656 |
| No log | 3.0137 | 220 | 0.7178 | 0.5706 | 0.7178 | 0.8472 |
| No log | 3.0411 | 222 | 0.6864 | 0.5473 | 0.6864 | 0.8285 |
| No log | 3.0685 | 224 | 0.7470 | 0.5530 | 0.7470 | 0.8643 |
| No log | 3.0959 | 226 | 0.8840 | 0.5482 | 0.8840 | 0.9402 |
| No log | 3.1233 | 228 | 1.0682 | 0.4988 | 1.0682 | 1.0335 |
| No log | 3.1507 | 230 | 1.0229 | 0.5072 | 1.0229 | 1.0114 |
| No log | 3.1781 | 232 | 0.8268 | 0.5374 | 0.8268 | 0.9093 |
| No log | 3.2055 | 234 | 0.7989 | 0.5700 | 0.7989 | 0.8938 |
| No log | 3.2329 | 236 | 0.8782 | 0.5452 | 0.8782 | 0.9371 |
| No log | 3.2603 | 238 | 1.1297 | 0.4670 | 1.1297 | 1.0629 |
| No log | 3.2877 | 240 | 1.2590 | 0.4541 | 1.2590 | 1.1221 |
| No log | 3.3151 | 242 | 1.1482 | 0.4502 | 1.1482 | 1.0715 |
| No log | 3.3425 | 244 | 1.0642 | 0.4416 | 1.0642 | 1.0316 |
| No log | 3.3699 | 246 | 0.9366 | 0.4918 | 0.9366 | 0.9678 |
| No log | 3.3973 | 248 | 0.8912 | 0.5259 | 0.8912 | 0.9440 |
| No log | 3.4247 | 250 | 0.8993 | 0.5092 | 0.8993 | 0.9483 |
| No log | 3.4521 | 252 | 0.7912 | 0.5462 | 0.7912 | 0.8895 |
| No log | 3.4795 | 254 | 0.6999 | 0.5678 | 0.6999 | 0.8366 |
| No log | 3.5068 | 256 | 0.6922 | 0.5738 | 0.6922 | 0.8320 |
| No log | 3.5342 | 258 | 0.7488 | 0.5313 | 0.7488 | 0.8653 |
| No log | 3.5616 | 260 | 0.6995 | 0.5704 | 0.6995 | 0.8363 |
| No log | 3.5890 | 262 | 0.7204 | 0.5743 | 0.7204 | 0.8488 |
| No log | 3.6164 | 264 | 0.7579 | 0.5599 | 0.7579 | 0.8706 |
| No log | 3.6438 | 266 | 0.7453 | 0.5503 | 0.7453 | 0.8633 |
| No log | 3.6712 | 268 | 0.8802 | 0.5469 | 0.8802 | 0.9382 |
| No log | 3.6986 | 270 | 0.9347 | 0.5198 | 0.9347 | 0.9668 |
| No log | 3.7260 | 272 | 0.9041 | 0.5462 | 0.9041 | 0.9509 |
| No log | 3.7534 | 274 | 0.9782 | 0.4688 | 0.9782 | 0.9891 |
| No log | 3.7808 | 276 | 1.0847 | 0.4728 | 1.0847 | 1.0415 |
| No log | 3.8082 | 278 | 0.9741 | 0.4699 | 0.9741 | 0.9870 |
| No log | 3.8356 | 280 | 0.8039 | 0.5626 | 0.8039 | 0.8966 |
| No log | 3.8630 | 282 | 0.6992 | 0.5395 | 0.6992 | 0.8362 |
| No log | 3.8904 | 284 | 0.6733 | 0.5562 | 0.6733 | 0.8206 |
| No log | 3.9178 | 286 | 0.7078 | 0.5504 | 0.7078 | 0.8413 |
| No log | 3.9452 | 288 | 0.8144 | 0.5450 | 0.8144 | 0.9025 |
| No log | 3.9726 | 290 | 0.9239 | 0.5230 | 0.9239 | 0.9612 |
| No log | 4.0 | 292 | 0.8983 | 0.5320 | 0.8983 | 0.9478 |
| No log | 4.0274 | 294 | 0.8287 | 0.5286 | 0.8287 | 0.9103 |
| No log | 4.0548 | 296 | 0.7662 | 0.5179 | 0.7662 | 0.8753 |
| No log | 4.0822 | 298 | 0.7526 | 0.5188 | 0.7526 | 0.8675 |
| No log | 4.1096 | 300 | 0.7347 | 0.5166 | 0.7347 | 0.8572 |
| No log | 4.1370 | 302 | 0.8174 | 0.5422 | 0.8174 | 0.9041 |
| No log | 4.1644 | 304 | 0.9810 | 0.4885 | 0.9810 | 0.9905 |
| No log | 4.1918 | 306 | 0.9052 | 0.4962 | 0.9052 | 0.9514 |
| No log | 4.2192 | 308 | 0.7496 | 0.5424 | 0.7496 | 0.8658 |
| No log | 4.2466 | 310 | 0.6496 | 0.5315 | 0.6496 | 0.8059 |
| No log | 4.2740 | 312 | 0.6542 | 0.5419 | 0.6542 | 0.8088 |
| No log | 4.3014 | 314 | 0.6737 | 0.5329 | 0.6737 | 0.8208 |
| No log | 4.3288 | 316 | 0.6502 | 0.5319 | 0.6502 | 0.8064 |
| No log | 4.3562 | 318 | 0.6760 | 0.5290 | 0.6760 | 0.8222 |
| No log | 4.3836 | 320 | 0.7586 | 0.5227 | 0.7586 | 0.8710 |
| No log | 4.4110 | 322 | 0.8736 | 0.5053 | 0.8736 | 0.9347 |
| No log | 4.4384 | 324 | 0.9240 | 0.4810 | 0.9240 | 0.9613 |
| No log | 4.4658 | 326 | 0.8732 | 0.5060 | 0.8732 | 0.9345 |
| No log | 4.4932 | 328 | 0.8898 | 0.5115 | 0.8898 | 0.9433 |
| No log | 4.5205 | 330 | 0.8206 | 0.5033 | 0.8206 | 0.9059 |
| No log | 4.5479 | 332 | 0.7492 | 0.5630 | 0.7492 | 0.8656 |
| No log | 4.5753 | 334 | 0.7549 | 0.5670 | 0.7549 | 0.8688 |
| No log | 4.6027 | 336 | 0.7633 | 0.5529 | 0.7633 | 0.8737 |
| No log | 4.6301 | 338 | 0.8646 | 0.5126 | 0.8646 | 0.9298 |
| No log | 4.6575 | 340 | 0.8737 | 0.4990 | 0.8737 | 0.9347 |
| No log | 4.6849 | 342 | 0.9057 | 0.5038 | 0.9057 | 0.9517 |
| No log | 4.7123 | 344 | 0.9271 | 0.4903 | 0.9271 | 0.9629 |
| No log | 4.7397 | 346 | 0.9532 | 0.4996 | 0.9532 | 0.9763 |
| No log | 4.7671 | 348 | 0.8860 | 0.5038 | 0.8860 | 0.9413 |
| No log | 4.7945 | 350 | 0.7421 | 0.5372 | 0.7421 | 0.8614 |
| No log | 4.8219 | 352 | 0.6942 | 0.5618 | 0.6942 | 0.8332 |
| No log | 4.8493 | 354 | 0.7140 | 0.5372 | 0.7140 | 0.8450 |
| No log | 4.8767 | 356 | 0.8073 | 0.5499 | 0.8073 | 0.8985 |
| No log | 4.9041 | 358 | 0.8803 | 0.5485 | 0.8803 | 0.9383 |
| No log | 4.9315 | 360 | 0.8704 | 0.5366 | 0.8704 | 0.9329 |
| No log | 4.9589 | 362 | 0.7833 | 0.54 | 0.7833 | 0.8851 |
| No log | 4.9863 | 364 | 0.6801 | 0.5785 | 0.6801 | 0.8247 |
| No log | 5.0137 | 366 | 0.6927 | 0.4934 | 0.6927 | 0.8323 |
| No log | 5.0411 | 368 | 0.7183 | 0.5 | 0.7183 | 0.8475 |
| No log | 5.0685 | 370 | 0.7219 | 0.5223 | 0.7219 | 0.8497 |
| No log | 5.0959 | 372 | 0.7556 | 0.5747 | 0.7556 | 0.8692 |
| No log | 5.1233 | 374 | 0.7327 | 0.5750 | 0.7327 | 0.8560 |
| No log | 5.1507 | 376 | 0.6965 | 0.5677 | 0.6965 | 0.8346 |
| No log | 5.1781 | 378 | 0.6431 | 0.5624 | 0.6431 | 0.8019 |
| No log | 5.2055 | 380 | 0.6156 | 0.5208 | 0.6156 | 0.7846 |
| No log | 5.2329 | 382 | 0.6071 | 0.5350 | 0.6071 | 0.7792 |
| No log | 5.2603 | 384 | 0.6480 | 0.5969 | 0.6480 | 0.8050 |
| No log | 5.2877 | 386 | 0.7259 | 0.5486 | 0.7259 | 0.8520 |
| No log | 5.3151 | 388 | 0.7711 | 0.5473 | 0.7711 | 0.8781 |
| No log | 5.3425 | 390 | 0.7545 | 0.5473 | 0.7545 | 0.8686 |
| No log | 5.3699 | 392 | 0.7411 | 0.5874 | 0.7411 | 0.8609 |
| No log | 5.3973 | 394 | 0.7722 | 0.5761 | 0.7722 | 0.8787 |
| No log | 5.4247 | 396 | 0.7798 | 0.5376 | 0.7798 | 0.8831 |
| No log | 5.4521 | 398 | 0.8254 | 0.5485 | 0.8254 | 0.9085 |
| No log | 5.4795 | 400 | 0.8981 | 0.5181 | 0.8981 | 0.9477 |
| No log | 5.5068 | 402 | 0.9899 | 0.5027 | 0.9899 | 0.9950 |
| No log | 5.5342 | 404 | 1.0011 | 0.5027 | 1.0011 | 1.0006 |
| No log | 5.5616 | 406 | 1.0158 | 0.5163 | 1.0158 | 1.0079 |
| No log | 5.5890 | 408 | 0.9788 | 0.5163 | 0.9788 | 0.9893 |
| No log | 5.6164 | 410 | 0.9538 | 0.5310 | 0.9538 | 0.9766 |
| No log | 5.6438 | 412 | 0.8710 | 0.5218 | 0.8710 | 0.9333 |
| No log | 5.6712 | 414 | 0.7951 | 0.5544 | 0.7951 | 0.8917 |
| No log | 5.6986 | 416 | 0.7163 | 0.5346 | 0.7163 | 0.8464 |
| No log | 5.7260 | 418 | 0.6769 | 0.5491 | 0.6769 | 0.8227 |
| No log | 5.7534 | 420 | 0.6863 | 0.5547 | 0.6863 | 0.8284 |
| No log | 5.7808 | 422 | 0.7410 | 0.5530 | 0.7410 | 0.8608 |
| No log | 5.8082 | 424 | 0.8584 | 0.5143 | 0.8584 | 0.9265 |
| No log | 5.8356 | 426 | 0.9551 | 0.5118 | 0.9551 | 0.9773 |
| No log | 5.8630 | 428 | 0.9638 | 0.5160 | 0.9638 | 0.9818 |
| No log | 5.8904 | 430 | 0.8884 | 0.5171 | 0.8884 | 0.9425 |
| No log | 5.9178 | 432 | 0.8505 | 0.4964 | 0.8505 | 0.9222 |
| No log | 5.9452 | 434 | 0.7922 | 0.5174 | 0.7922 | 0.8901 |
| No log | 5.9726 | 436 | 0.7765 | 0.5519 | 0.7765 | 0.8812 |
| No log | 6.0 | 438 | 0.8185 | 0.5308 | 0.8185 | 0.9047 |
| No log | 6.0274 | 440 | 0.9084 | 0.5283 | 0.9084 | 0.9531 |
| No log | 6.0548 | 442 | 0.9587 | 0.5229 | 0.9587 | 0.9791 |
| No log | 6.0822 | 444 | 0.8894 | 0.5233 | 0.8894 | 0.9431 |
| No log | 6.1096 | 446 | 0.7756 | 0.5558 | 0.7756 | 0.8807 |
| No log | 6.1370 | 448 | 0.7336 | 0.5532 | 0.7336 | 0.8565 |
| No log | 6.1644 | 450 | 0.6898 | 0.5574 | 0.6898 | 0.8305 |
| No log | 6.1918 | 452 | 0.6626 | 0.5863 | 0.6626 | 0.8140 |
| No log | 6.2192 | 454 | 0.6650 | 0.5921 | 0.6650 | 0.8155 |
| No log | 6.2466 | 456 | 0.7091 | 0.5530 | 0.7091 | 0.8421 |
| No log | 6.2740 | 458 | 0.8145 | 0.5432 | 0.8145 | 0.9025 |
| No log | 6.3014 | 460 | 0.9461 | 0.5248 | 0.9461 | 0.9727 |
| No log | 6.3288 | 462 | 1.0033 | 0.5225 | 1.0033 | 1.0017 |
| No log | 6.3562 | 464 | 0.9422 | 0.5301 | 0.9422 | 0.9707 |
| No log | 6.3836 | 466 | 0.8697 | 0.5350 | 0.8697 | 0.9326 |
| No log | 6.4110 | 468 | 0.8302 | 0.5634 | 0.8302 | 0.9111 |
| No log | 6.4384 | 470 | 0.8449 | 0.5433 | 0.8449 | 0.9192 |
| No log | 6.4658 | 472 | 0.9255 | 0.5187 | 0.9255 | 0.9620 |
| No log | 6.4932 | 474 | 1.0270 | 0.5151 | 1.0270 | 1.0134 |
| No log | 6.5205 | 476 | 1.0013 | 0.5263 | 1.0013 | 1.0007 |
| No log | 6.5479 | 478 | 0.9323 | 0.5265 | 0.9323 | 0.9656 |
| No log | 6.5753 | 480 | 0.9040 | 0.5330 | 0.9040 | 0.9508 |
| No log | 6.6027 | 482 | 0.8871 | 0.5350 | 0.8871 | 0.9419 |
| No log | 6.6301 | 484 | 0.9390 | 0.5256 | 0.9390 | 0.9690 |
| No log | 6.6575 | 486 | 1.0055 | 0.5118 | 1.0055 | 1.0027 |
| No log | 6.6849 | 488 | 1.0343 | 0.4852 | 1.0343 | 1.0170 |
| No log | 6.7123 | 490 | 1.0665 | 0.4670 | 1.0665 | 1.0327 |
| No log | 6.7397 | 492 | 0.9904 | 0.5229 | 0.9904 | 0.9952 |
| No log | 6.7671 | 494 | 0.8535 | 0.5568 | 0.8535 | 0.9239 |
| No log | 6.7945 | 496 | 0.7967 | 0.5412 | 0.7967 | 0.8926 |
| No log | 6.8219 | 498 | 0.8101 | 0.5440 | 0.8101 | 0.9000 |
| 0.4867 | 6.8493 | 500 | 0.8854 | 0.5482 | 0.8854 | 0.9409 |
| 0.4867 | 6.8767 | 502 | 1.0128 | 0.5252 | 1.0128 | 1.0064 |
| 0.4867 | 6.9041 | 504 | 1.1415 | 0.4277 | 1.1415 | 1.0684 |
| 0.4867 | 6.9315 | 506 | 1.1208 | 0.4502 | 1.1208 | 1.0587 |
| 0.4867 | 6.9589 | 508 | 1.0027 | 0.5208 | 1.0027 | 1.0014 |
| 0.4867 | 6.9863 | 510 | 0.8456 | 0.5516 | 0.8456 | 0.9195 |
| 0.4867 | 7.0137 | 512 | 0.7124 | 0.5530 | 0.7124 | 0.8440 |
| 0.4867 | 7.0411 | 514 | 0.6782 | 0.5741 | 0.6782 | 0.8235 |
| 0.4867 | 7.0685 | 516 | 0.6987 | 0.5738 | 0.6987 | 0.8359 |
| 0.4867 | 7.0959 | 518 | 0.7417 | 0.5656 | 0.7417 | 0.8612 |
| 0.4867 | 7.1233 | 520 | 0.8253 | 0.5724 | 0.8253 | 0.9085 |
| 0.4867 | 7.1507 | 522 | 0.9452 | 0.5340 | 0.9452 | 0.9722 |
| 0.4867 | 7.1781 | 524 | 1.0137 | 0.5124 | 1.0137 | 1.0068 |
| 0.4867 | 7.2055 | 526 | 0.9868 | 0.5131 | 0.9868 | 0.9934 |
| 0.4867 | 7.2329 | 528 | 0.9039 | 0.5568 | 0.9039 | 0.9507 |
| 0.4867 | 7.2603 | 530 | 0.8528 | 0.5724 | 0.8528 | 0.9235 |
| 0.4867 | 7.2877 | 532 | 0.8076 | 0.5598 | 0.8076 | 0.8987 |
| 0.4867 | 7.3151 | 534 | 0.7755 | 0.5612 | 0.7755 | 0.8806 |
| 0.4867 | 7.3425 | 536 | 0.7688 | 0.5390 | 0.7688 | 0.8768 |
| 0.4867 | 7.3699 | 538 | 0.7492 | 0.5387 | 0.7493 | 0.8656 |
| 0.4867 | 7.3973 | 540 | 0.7676 | 0.5557 | 0.7676 | 0.8761 |
| 0.4867 | 7.4247 | 542 | 0.7793 | 0.5598 | 0.7793 | 0.8828 |
| 0.4867 | 7.4521 | 544 | 0.7666 | 0.5557 | 0.7666 | 0.8755 |
| 0.4867 | 7.4795 | 546 | 0.7416 | 0.5557 | 0.7416 | 0.8612 |
| 0.4867 | 7.5068 | 548 | 0.7573 | 0.5557 | 0.7573 | 0.8702 |
| 0.4867 | 7.5342 | 550 | 0.8009 | 0.5638 | 0.8009 | 0.8949 |
| 0.4867 | 7.5616 | 552 | 0.8164 | 0.5638 | 0.8164 | 0.9036 |
| 0.4867 | 7.5890 | 554 | 0.8507 | 0.5581 | 0.8507 | 0.9223 |
| 0.4867 | 7.6164 | 556 | 0.8781 | 0.5506 | 0.8781 | 0.9371 |
| 0.4867 | 7.6438 | 558 | 0.8836 | 0.5356 | 0.8836 | 0.9400 |
| 0.4867 | 7.6712 | 560 | 0.8590 | 0.5344 | 0.8590 | 0.9268 |
| 0.4867 | 7.6986 | 562 | 0.8603 | 0.5344 | 0.8603 | 0.9275 |
| 0.4867 | 7.7260 | 564 | 0.8196 | 0.5416 | 0.8196 | 0.9053 |
| 0.4867 | 7.7534 | 566 | 0.8042 | 0.5544 | 0.8042 | 0.8968 |
| 0.4867 | 7.7808 | 568 | 0.7981 | 0.5544 | 0.7981 | 0.8934 |
| 0.4867 | 7.8082 | 570 | 0.7807 | 0.5518 | 0.7807 | 0.8836 |
| 0.4867 | 7.8356 | 572 | 0.7949 | 0.5544 | 0.7949 | 0.8916 |
| 0.4867 | 7.8630 | 574 | 0.8246 | 0.5544 | 0.8246 | 0.9081 |
| 0.4867 | 7.8904 | 576 | 0.8194 | 0.5698 | 0.8194 | 0.9052 |
| 0.4867 | 7.9178 | 578 | 0.7835 | 0.5504 | 0.7835 | 0.8851 |
| 0.4867 | 7.9452 | 580 | 0.7273 | 0.5433 | 0.7273 | 0.8528 |
| 0.4867 | 7.9726 | 582 | 0.6765 | 0.5617 | 0.6765 | 0.8225 |
| 0.4867 | 8.0 | 584 | 0.6655 | 0.5839 | 0.6655 | 0.8158 |
| 0.4867 | 8.0274 | 586 | 0.6654 | 0.5839 | 0.6654 | 0.8157 |
| 0.4867 | 8.0548 | 588 | 0.6893 | 0.5617 | 0.6893 | 0.8302 |
| 0.4867 | 8.0822 | 590 | 0.7441 | 0.5433 | 0.7441 | 0.8626 |
| 0.4867 | 8.1096 | 592 | 0.7864 | 0.5504 | 0.7864 | 0.8868 |
| 0.4867 | 8.1370 | 594 | 0.8255 | 0.5595 | 0.8255 | 0.9086 |
| 0.4867 | 8.1644 | 596 | 0.8401 | 0.5595 | 0.8401 | 0.9166 |
| 0.4867 | 8.1918 | 598 | 0.8513 | 0.5391 | 0.8513 | 0.9227 |
| 0.4867 | 8.2192 | 600 | 0.8729 | 0.5391 | 0.8729 | 0.9343 |
| 0.4867 | 8.2466 | 602 | 0.8810 | 0.5555 | 0.8810 | 0.9386 |
| 0.4867 | 8.2740 | 604 | 0.8536 | 0.5391 | 0.8536 | 0.9239 |
| 0.4867 | 8.3014 | 606 | 0.8084 | 0.5426 | 0.8084 | 0.8991 |
| 0.4867 | 8.3288 | 608 | 0.7623 | 0.5464 | 0.7623 | 0.8731 |
| 0.4867 | 8.3562 | 610 | 0.7465 | 0.5435 | 0.7465 | 0.8640 |
| 0.4867 | 8.3836 | 612 | 0.7330 | 0.5403 | 0.7330 | 0.8561 |
| 0.4867 | 8.4110 | 614 | 0.7360 | 0.5403 | 0.7360 | 0.8579 |
| 0.4867 | 8.4384 | 616 | 0.7539 | 0.5474 | 0.7539 | 0.8683 |
| 0.4867 | 8.4658 | 618 | 0.7877 | 0.5516 | 0.7877 | 0.8875 |
| 0.4867 | 8.4932 | 620 | 0.7909 | 0.5516 | 0.7909 | 0.8893 |
| 0.4867 | 8.5205 | 622 | 0.8096 | 0.5543 | 0.8096 | 0.8998 |
| 0.4867 | 8.5479 | 624 | 0.8405 | 0.5451 | 0.8405 | 0.9168 |
| 0.4867 | 8.5753 | 626 | 0.8438 | 0.5451 | 0.8438 | 0.9186 |
| 0.4867 | 8.6027 | 628 | 0.8370 | 0.5451 | 0.8370 | 0.9149 |
| 0.4867 | 8.6301 | 630 | 0.8106 | 0.5542 | 0.8106 | 0.9003 |
| 0.4867 | 8.6575 | 632 | 0.7948 | 0.5676 | 0.7948 | 0.8915 |
| 0.4867 | 8.6849 | 634 | 0.7776 | 0.5653 | 0.7776 | 0.8818 |
| 0.4867 | 8.7123 | 636 | 0.7493 | 0.5474 | 0.7493 | 0.8656 |
| 0.4867 | 8.7397 | 638 | 0.7291 | 0.5387 | 0.7291 | 0.8539 |
| 0.4867 | 8.7671 | 640 | 0.7152 | 0.5387 | 0.7152 | 0.8457 |
| 0.4867 | 8.7945 | 642 | 0.7173 | 0.5387 | 0.7173 | 0.8469 |
| 0.4867 | 8.8219 | 644 | 0.7262 | 0.5444 | 0.7262 | 0.8522 |
| 0.4867 | 8.8493 | 646 | 0.7506 | 0.5474 | 0.7506 | 0.8664 |
| 0.4867 | 8.8767 | 648 | 0.7721 | 0.5653 | 0.7721 | 0.8787 |
| 0.4867 | 8.9041 | 650 | 0.7953 | 0.5676 | 0.7953 | 0.8918 |
| 0.4867 | 8.9315 | 652 | 0.8135 | 0.5647 | 0.8135 | 0.9020 |
| 0.4867 | 8.9589 | 654 | 0.8363 | 0.5647 | 0.8363 | 0.9145 |
| 0.4867 | 8.9863 | 656 | 0.8365 | 0.5747 | 0.8365 | 0.9146 |
| 0.4867 | 9.0137 | 658 | 0.8302 | 0.5581 | 0.8302 | 0.9112 |
| 0.4867 | 9.0411 | 660 | 0.8322 | 0.5581 | 0.8322 | 0.9123 |
| 0.4867 | 9.0685 | 662 | 0.8400 | 0.5581 | 0.8400 | 0.9165 |
| 0.4867 | 9.0959 | 664 | 0.8400 | 0.5581 | 0.8400 | 0.9165 |
| 0.4867 | 9.1233 | 666 | 0.8266 | 0.5581 | 0.8266 | 0.9092 |
| 0.4867 | 9.1507 | 668 | 0.8165 | 0.5556 | 0.8165 | 0.9036 |
| 0.4867 | 9.1781 | 670 | 0.8083 | 0.5504 | 0.8083 | 0.8991 |
| 0.4867 | 9.2055 | 672 | 0.8121 | 0.5504 | 0.8121 | 0.9011 |
| 0.4867 | 9.2329 | 674 | 0.8261 | 0.5595 | 0.8261 | 0.9089 |
| 0.4867 | 9.2603 | 676 | 0.8381 | 0.5618 | 0.8381 | 0.9155 |
| 0.4867 | 9.2877 | 678 | 0.8437 | 0.5618 | 0.8437 | 0.9185 |
| 0.4867 | 9.3151 | 680 | 0.8648 | 0.5543 | 0.8648 | 0.9300 |
| 0.4867 | 9.3425 | 682 | 0.8812 | 0.5422 | 0.8812 | 0.9387 |
| 0.4867 | 9.3699 | 684 | 0.8950 | 0.5422 | 0.8950 | 0.9461 |
| 0.4867 | 9.3973 | 686 | 0.9049 | 0.5387 | 0.9049 | 0.9512 |
| 0.4867 | 9.4247 | 688 | 0.9011 | 0.5579 | 0.9011 | 0.9493 |
| 0.4867 | 9.4521 | 690 | 0.8872 | 0.5422 | 0.8872 | 0.9419 |
| 0.4867 | 9.4795 | 692 | 0.8737 | 0.5543 | 0.8737 | 0.9347 |
| 0.4867 | 9.5068 | 694 | 0.8599 | 0.5543 | 0.8599 | 0.9273 |
| 0.4867 | 9.5342 | 696 | 0.8510 | 0.5618 | 0.8510 | 0.9225 |
| 0.4867 | 9.5616 | 698 | 0.8392 | 0.5581 | 0.8392 | 0.9161 |
| 0.4867 | 9.5890 | 700 | 0.8251 | 0.5543 | 0.8251 | 0.9083 |
| 0.4867 | 9.6164 | 702 | 0.8176 | 0.5543 | 0.8176 | 0.9042 |
| 0.4867 | 9.6438 | 704 | 0.8130 | 0.5556 | 0.8130 | 0.9017 |
| 0.4867 | 9.6712 | 706 | 0.8128 | 0.5556 | 0.8128 | 0.9016 |
| 0.4867 | 9.6986 | 708 | 0.8144 | 0.5556 | 0.8144 | 0.9024 |
| 0.4867 | 9.7260 | 710 | 0.8165 | 0.5543 | 0.8165 | 0.9036 |
| 0.4867 | 9.7534 | 712 | 0.8202 | 0.5543 | 0.8202 | 0.9056 |
| 0.4867 | 9.7808 | 714 | 0.8241 | 0.5543 | 0.8241 | 0.9078 |
| 0.4867 | 9.8082 | 716 | 0.8263 | 0.5543 | 0.8263 | 0.9090 |
| 0.4867 | 9.8356 | 718 | 0.8319 | 0.5543 | 0.8319 | 0.9121 |
| 0.4867 | 9.8630 | 720 | 0.8351 | 0.5581 | 0.8351 | 0.9138 |
| 0.4867 | 9.8904 | 722 | 0.8357 | 0.5581 | 0.8357 | 0.9142 |
| 0.4867 | 9.9178 | 724 | 0.8347 | 0.5581 | 0.8347 | 0.9136 |
| 0.4867 | 9.9452 | 726 | 0.8333 | 0.5543 | 0.8333 | 0.9129 |
| 0.4867 | 9.9726 | 728 | 0.8329 | 0.5543 | 0.8329 | 0.9126 |
| 0.4867 | 10.0 | 730 | 0.8324 | 0.5543 | 0.8324 | 0.9124 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
asdfre453/ninad | asdfre453 | "2025-03-26T22:18:42Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-26T21:20:44Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Protachevicz/FineTune_DeepSeekR1 | Protachevicz | "2025-02-26T22:17:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-26T17:23:14Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arvnoodle/hcl-codellama-7b-instruct-javascript-lotuscript | arvnoodle | "2024-03-19T14:35:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-19T14:35:23Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# Uploaded model
- **Developed by:** arvnoodle
- **License:** apache-2.0
- **Finetuned from model :** codellama/CodeLlama-7b-Instruct-hf
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
imagepipeline/Redmond-Logo-Liberte-SD1.5 | imagepipeline | "2024-02-27T12:07:14Z" | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-02-27T12:07:11Z" | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Redmond-Logo-Liberte-SD1.5
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/530ba359-f680-4704-aeb1-3fefb2cd632d/width=450/02503-1337.jpeg" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - The tag for the model: LogoRedAF, logo. The LoRA is not perfect and sometimes needs more than one generation to create good images. I recommend simple prompts. I really hope you like the LoRA and use it. If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi. Follow me on Twitter to be the first to know about new models: https://twitter.com/artificialguybr/
[](https://imagepipeline.io/models/Redmond-Logo-Liberte-SD1.5?id=a6726db9-8c2f-455f-9269-62b738f95ebd/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "a6726db9-8c2f-455f-9269-62b738f95ebd",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
atsuki-yamaguchi/tigerbot-7b-base-clp-ar | atsuki-yamaguchi | "2024-04-22T09:04:54Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-21T17:07:14Z" | ---
license: mit
language: ar
---
TigerBot-7B Arabic [LAPT + CLP]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clp-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clp-ar",
device_map="auto",
load_in_8bit=True,
)
```
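Once loaded, generation goes through the usual `generate` API — a brief sketch assuming the GPU setup above (the Arabic prompt is illustrative):

```python
# "The French capital is" — illustrative Arabic prompt.
inputs = tokenizer("العاصمة الفرنسية هي", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```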
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
dranger003/GritLM-8x7B-iMat.GGUF | dranger003 | "2024-02-28T01:54:25Z" | 0 | 0 | gguf | [
"gguf",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-02-27T23:05:40Z" | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: gguf
---
GGUF importance matrix (imatrix) quants for https://huggingface.co/GritLM/GritLM-8x7B
The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
> GritLM is a generative representational instruction tuned language model. It unifies text representation (embedding) and text generation into a single model achieving state-of-the-art performance on both types of tasks.
| Layers | Context | [Template (Text Representation)](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference) | [Template (Text Generation)](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference) |
| --- | --- | --- | --- |
| <pre>32</pre> | <pre>32768</pre> | <pre>\<s\>\<\|user\|\><br>{instruction}<br>\<\|embed\|\><br>{sample}</pre> | <pre>\<s\>\<\|user\|\><br>{prompt}<br>\<\|assistant\|\><br>{response}</pre> |
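One plausible way to exercise the text-generation template locally is through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) — a minimal sketch; the quant filename below is an assumption, so substitute the file you downloaded:

```python
from llama_cpp import Llama

# Load one of the imatrix quants (the filename is illustrative).
llm = Llama(model_path="gritlm-8x7b.IQ3_XXS.gguf", n_ctx=4096)

# Fill the text-generation template from the table above.
prompt = "<s><|user|>\nSummarize what an importance matrix is in one sentence.\n<|assistant|>\n"
out = llm(prompt, max_tokens=128, stop=["<|user|>"])
print(out["choices"][0]["text"])
```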
|
guillermoruiz/mex_state | guillermoruiz | "2025-01-23T17:48:40Z" | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"es",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-24T18:59:45Z" | ---
language:
- es
license: mit
library_name: transformers
---
# Language Model for Mexican Spanish
<!-- Provide a quick summary of what the model is/does. -->
This RoBERTa-based model was trained on more than 140 million Spanish-language tweets from Mexico, collected between December 2015 and February 2023.
Each message was given a regionalized information tag as follows:
*state* _GEO *message*
Some examples (left verbatim, as in the training data):
- Coahuila _GEO Cómo estás amiga, nos conocemos? Soy soltero busco soltera. #PiedrasNegras #nava #allende #zaragoza
- Tamaulipas _GEO Ando de buenas que ya les devolví sus unfollows y métanselos por el culo ☺.
- BCS _GEO Ésa canción que cantas en silencio y la otra persona tmb. Bn raro.
- Tamaulipas _GEO Hoy es la primera vez que manejo en estado de ebriedad 😞🙃
As can be seen, capitalization, emoticons, and misspelled words were preserved.
For privacy reasons, user mentions were replaced with the token _USR and internet
addresses with _URL.
The tokens indicating each state of the republic are:
|State|Token|
|----------|----------|
|Aguascalientes|Aguascalientes|
|Baja California|BC|
|Baja California Sur|BCS|
|Campeche|Campeche|
|Chiapas|Chiapas|
|Chihuahua|Chihuahua|
|Ciudad de México|Mexico_City|
|Coahuila de Zaragoza|Coahuila|
|Colima|Colima|
|Durango|Durango|
|Guanajuato|Guanajuato|
|Guerrero|Guerrero|
|Hidalgo|Hidalgo|
|Jalisco|Jalisco|
|Michoacán de Ocampo|Michoacán|
|Morelos|Morelos|
|México|Mexico|
|Nayarit|Nayarit|
|Nuevo León|NL|
|Oaxaca|Oaxaca|
|Puebla|Puebla|
|Querétaro|Querétaro|
|Quintana Roo|QR|
|San Luis Potosí|SLP|
|Sinaloa|Sinaloa|
|Sonora|Sonora|
|Tabasco|Tabasco|
|Tamaulipas|Tamaulipas|
|Tlaxcala|Tlaxcala|
|Veracruz de Ignacio de la Llave|Veracruz|
|Yucatán|Yucatán|
|Zacatecas|Zacatecas|
A vocabulary of size 30k was created using WordPiece. The model was trained with masked language modeling, masking words with probability 0.15.
The AdamW optimizer was used with a learning rate of 0.00002 for one epoch.
## Usage
The model can be used with a `pipeline`:
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model="guillermoruiz/mex_state")
```
```
for p in unmasker("<mask> _GEO Van a ganar los Tigres."):
print(p['token_str'], p['score'])
```
This produces the output:
```
NL 0.2888392508029938
Coahuila 0.08982843905687332
Tamaulipas 0.0630788803100586
Mexico_City 0.06246586889028549
Jalisco 0.06113814190030098
```
This indicates that the most likely region is Nuevo León. Other examples:
```
for p in unmasker("<mask> _GEO Van a ganar los Xolos."):
print(p['token_str'], p['score'])
```
```
BC 0.23284225165843964
Jalisco 0.07845071703195572
Mexico_City 0.0761856958270073
Sinaloa 0.06842593103647232
Mexico 0.06353132426738739
```
```
for p in unmasker("<mask> _GEO Vamos para Pátzcuaro."):
print(p['token_str'], p['score'])
```
```
Michoacán 0.6461890339851379
Guanajuato 0.0919179916381836
Jalisco 0.07710094749927521
Sonora 0.022813264280557632
Yucatán 0.02254747971892357
```
```
for p in unmasker("<mask> _GEO Vamos para Mérida."):
print(p['token_str'], p['score'])
```
```
Yucatán 0.9046052694320679
QR 0.01990741863846779
Mexico_City 0.009980794973671436
Tabasco 0.009980794973671436
Jalisco 0.007273637689650059
```
## Regional Information
Using the attention layers, the words most important for choosing the region token were extracted.
These words formed the vocabulary associated with each region.
The vocabularies could then be compared to produce the following similarity matrix.

## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Evaluation
This model is the one labeled MexSmall in the following table.
The test sets are tweets written in Mexico, and it can be seen that the models
with regional information (MexSmall and MexLarge) have a clear advantage over the
alternatives.
|Dataset | MexSmall | MexLarge | BETO | RoBERTuito | BERTIN | Metric |
|----------|----------|----------|----------|----------|----------|----------|
|RegTweets | 0.7014 | 0.7244 | 0.6843 | 0.6689 | 0.7083 | macro-F1 |
|MexEmojis | 0.5044 | 0.5047 | 0.4223 | 0.4491 | 0.4832 | macro-F1 |
|HomoMex | 0.8131 | 0.8266 | 0.8099 | 0.8283 | 0.7934 | macro-F1 |
The [RegTweets](https://huggingface.co/datasets/guillermoruiz/RegTweets) and [MexEmojis](https://huggingface.co/datasets/guillermoruiz/MexEmojis) datasets are available on Hugging Face.
The following table shows the results on generic Spanish texts.
It can be seen that the models with regional information are highly competitive
with the alternatives.
| Dataset | MexSmall | MexLarge | BETO | RoBERTuito | BERTIN | Metric |
|----------|----------|----------|----------|----------|----------|----------|
| HAHA | 0.8208 | 0.8215 | 0.8238 | 0.8398 | 0.8063 | F1 |
| SemEval2018 Anger | 0.6435 | 0.6523 | 0.6148 | 0.6764 | 0.5406 | pearson |
| SemEval2018 Fear | 0.7021 | 0.6993 | 0.6750 | 0.7136 | 0.6809 | pearson |
| SemEval2018 Joy | 0.7220 | 0.7226 | 0.7124 | 0.7468 | 0.7391 | pearson |
| SemEval2018 Sadness | 0.7086 | 0.7072 | 0.6781 | 0.7475 | 0.6548 | pearson |
| SemEval2018 Valence | 0.8015 | 0.7994 | 0.7569 | 0.8017 | 0.6943 | pearson |
| HOPE | 0.7115 | 0.7036 | 0.6852 | 0.7347 | 0.6872 | macro-F1 |
| RestMex 3 | 0.7528 | 0.7499 | 0.7629 | 0.7588 | 0.7583 | Special |
| HUHU | 0.7849 | 0.7932 | 0.7887 | 0.8169 | 0.7938 | F1 |
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
|
ricochet/lora-sdxl-sheppy-exel | ricochet | "2024-01-08T14:04:54Z" | 156 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:ricochet/shep-exel",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2024-01-08T12:53:53Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: sshheeppppyy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- ricochet/shep-exel
---
# LoRA DreamBooth - ricochet/lora-sdxl-sheppy-exel
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
sshheeppppyy
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To use the base model together with these LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'ricochet/lora-sdxl-sheppy-exel',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic sshheeppppyy jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
Sunbird/tts-tacotron2-lug | Sunbird | "2023-01-31T19:28:56Z" | 80 | 4 | speechbrain | [
"speechbrain",
"text-to-speech",
"TTS",
"speech-synthesis",
"Tacotron2",
"lg",
"dataset:SALT-TTS",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2022-09-26T12:50:19Z" | ---
language: "lg"
tags:
- text-to-speech
- TTS
- speech-synthesis
- Tacotron2
- speechbrain
license: "apache-2.0"
datasets:
- SALT-TTS
metrics:
- mos
---
# Sunbird AI Text-to-Speech (TTS) model trained on Luganda text
### Text-to-Speech (TTS) with Tacotron2 trained on Professional Studio Recordings
This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain.
The pre-trained model takes a short text as input and produces a spectrogram as output. The final waveform is obtained by applying a vocoder (e.g., HiFiGAN) on top of the generated spectrogram.
### Install SpeechBrain
```
pip install speechbrain
```
### Perform Text-to-Speech (TTS)
```
import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN
# Initialize TTS (tacotron2) and Vocoder (HiFIGAN)
tacotron2 = Tacotron2.from_hparams(source="Sunbird/tts-tacotron2-lug", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")
# Running the TTS
mel_output, mel_length, alignment = tacotron2.encode_text("Mbagaliza Christmass Enungi Nomwaka Omugya Gubaberere Gwamirembe")
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)
# Save the waveform
torchaudio.save('example_TTS.wav',waveforms.squeeze(1), 22050)
```
If you want to generate multiple sentences in one shot, you can do it this way:
```
from speechbrain.pretrained import Tacotron2
tacotron2 = Tacotron2.from_hparams(source="Sunbird/tts-tacotron2-lug", savedir="tmpdir")
items = [
"Nsanyuse okukulaba",
"Erinnya lyo ggwe ani?",
"Mbagaliza Christmass Enungi Nomwaka Omugya Gubaberere Gwamirembe"
]
mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
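A minimal sketch of the same call with GPU options:
```
from speechbrain.pretrained import Tacotron2

tacotron2 = Tacotron2.from_hparams(
    source="Sunbird/tts-tacotron2-lug",
    savedir="tmpdir_tts",
    run_opts={"device": "cuda"},
)
```
|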
stablediffusionapi/fengjing | stablediffusionapi | "2025-01-20T11:20:41Z" | 29 | 0 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-17T03:42:55Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# fengjing API Inference

## Get API Key
Get an API key from [ModelsLab](https://modelslab.com/); no payment needed.
Replace Key in below code, change **model_id** to "fengjing"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/fengjing)
Credits: [View credits](https://civitai.com/?query=fengjing)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "fengjing",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Noorrabie/2levels_6565 | Noorrabie | "2025-03-23T18:29:30Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | "2025-03-23T18:20:06Z" | ---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 2levels_6565
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2levels_6565
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8327
- Macro F1: 0.7976
- Macro Precision: 0.8038
- Macro Recall: 0.7993
- Accuracy: 0.7982
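For a quick inference check, the fine-tuned checkpoint can be loaded with a standard text-classification pipeline (a sketch, not part of the original card; the label names are whatever the training script saved and are shown here only as placeholders):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Noorrabie/2levels_6565")
print(clf("هذا نص تجريبي"))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```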
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Macro Precision | Macro Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|
| No log | 1.0 | 103 | 0.4498 | 0.8015 | 0.8015 | 0.8016 | 0.8015 |
| No log | 2.0 | 206 | 0.4532 | 0.8035 | 0.8083 | 0.8049 | 0.8039 |
| No log | 3.0 | 309 | 0.5014 | 0.8066 | 0.8084 | 0.8073 | 0.8067 |
| No log | 4.0 | 412 | 0.6651 | 0.7985 | 0.8048 | 0.8002 | 0.7991 |
| 0.245 | 5.0 | 515 | 0.7793 | 0.8004 | 0.8050 | 0.8017 | 0.8008 |
| 0.245 | 6.0 | 618 | 0.8327 | 0.7976 | 0.8038 | 0.7993 | 0.7982 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.19.1
|
mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF | mradermacher | "2025-03-15T15:49:40Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Llama-2-7b-Llama-3.1-8B-mix",
"base_model:quantized:AmberYifan/Llama-2-7b-Llama-3.1-8B-mix",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-15T15:22:33Z" | ---
base_model: AmberYifan/Llama-2-7b-Llama-3.1-8B-mix
language:
- en
library_name: transformers
model_name: Llama-2-7b-Llama-3.1-8B-mix
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmberYifan/Llama-2-7b-Llama-3.1-8B-mix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-Llama-3.1-8B-mix-GGUF/resolve/main/Llama-2-7b-Llama-3.1-8B-mix.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Surabhi-K1/working | Surabhi-K1 | "2024-04-03T12:07:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-03-21T20:05:09Z" | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: working
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3229 | 1.0 | 53 | 0.6387 |
| 0.432 | 1.99 | 106 | 0.3294 |
| 0.2289 | 2.99 | 159 | 0.2288 |
| 0.1716 | 3.98 | 212 | 0.1856 |
| 0.1464 | 5.0 | 266 | 0.1649 |
| 0.1383 | 6.0 | 319 | 0.1544 |
| 0.1318 | 6.99 | 372 | 0.1511 |
| 0.1277 | 7.97 | 424 | 0.1480 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.2 |
lesso11/f9f1d1ec-0c69-420e-837a-9c786b0732d3 | lesso11 | "2025-01-01T21:44:46Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"region:us"
] | null | "2025-01-01T21:27:34Z" | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f9f1d1ec-0c69-420e-837a-9c786b0732d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
datasets:
- data_files:
- 9b44b6a631dcb1cf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b44b6a631dcb1cf_train_data.json
type:
field_instruction: original
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso11/f9f1d1ec-0c69-420e-837a-9c786b0732d3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/9b44b6a631dcb1cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f9f1d1ec-0c69-420e-837a-9c786b0732d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f9f1d1ec-0c69-420e-837a-9c786b0732d3
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# f9f1d1ec-0c69-420e-837a-9c786b0732d3
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0438
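Because these are LoRA adapter weights, one minimal way to load them for inference (a sketch, not part of the original card) is to attach the adapter to the base model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso11/f9f1d1ec-0c69-420e-837a-9c786b0732d3")
tokenizer = AutoTokenizer.from_pretrained("WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0")
```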
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4149 | 0.0029 | 1 | 0.3218 |
| 0.1809 | 0.0262 | 9 | 0.1867 |
| 0.1001 | 0.0523 | 18 | 0.0630 |
| 0.0576 | 0.0785 | 27 | 0.0565 |
| 0.0659 | 0.1047 | 36 | 0.0497 |
| 0.0617 | 0.1308 | 45 | 0.0485 |
| 0.0845 | 0.1570 | 54 | 0.0462 |
| 0.0589 | 0.1831 | 63 | 0.0444 |
| 0.0394 | 0.2093 | 72 | 0.0437 |
| 0.0197 | 0.2355 | 81 | 0.0443 |
| 0.0343 | 0.2616 | 90 | 0.0440 |
| 0.0871 | 0.2878 | 99 | 0.0438 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nakkati/baseline_final2 | nakkati | "2023-12-05T17:10:13Z" | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-12-05T16:23:33Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of Luffy, the pirate with a straw hat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - nakkati/baseline_final2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on "photo of Luffy, the pirate with a straw hat" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
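The card does not include an inference snippet; a minimal sketch with diffusers (the prompt is the instance prompt above, the other settings are assumptions):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SD 2.1 base pipeline and attach these LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nakkati/baseline_final2")
image = pipe("photo of Luffy, the pirate with a straw hat", num_inference_steps=25).images[0]
image.save("luffy.png")
```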
|
LINYICHEN09/task-4-google-gemma-2b | LINYICHEN09 | "2025-02-26T02:21:37Z" | 1,491 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2025-02-06T09:08:56Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
deepnet111/sn9-14b-star-001 | deepnet111 | "2025-01-16T17:00:26Z" | 258 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-16T16:56:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
casque/yoongonji-v1-lora-naivae-enc-4ep | casque | "2023-06-03T14:35:55Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-03T14:27:28Z" | ---
license: creativeml-openrail-m
---
|
ThomasROBERTparis/SNLP_XAI_hate-speech_Davidson_hate_roberta-base_LoRA_r16_lr2e-05 | ThomasROBERTparis | "2025-03-13T00:27:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-13T00:27:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ElviraL/ppo-LunarLander-v2 | ElviraL | "2024-06-17T01:27:50Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-13T08:33:26Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.82 +/- 11.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption about how the zip was saved in this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and load it (filename assumed).
checkpoint = load_from_hub(repo_id="ElviraL/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
certainstar/Trained-Chinese-classification | certainstar | "2024-07-08T08:12:47Z" | 112 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zh",
"dataset:Hello-SimpleAI/HC3-Chinese",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-04T16:37:31Z" | ---
license: mit
datasets:
- Hello-SimpleAI/HC3-Chinese
metrics:
- accuracy
language:
- zh
---
- This model was obtained by training `bert-base-chinese` for three epochs on the Chinese `HC3` dataset.
- It classifies whether a text was generated by `GPT`: a resulting `Label` of 0 means the text was not GPT-generated, while 1 means it was.
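A minimal inference sketch (assumed usage; the 0/1 indices follow the description above):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("certainstar/Trained-Chinese-classification")
model = AutoModelForSequenceClassification.from_pretrained("certainstar/Trained-Chinese-classification")

inputs = tok("这段文字看起来像是模型生成的。", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # 0 = not GPT-generated, 1 = GPT-generated
```
|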
myst72/Llama-3-8B_MIFT-en_Alldata_v3_QLoRA-PIFT-JaEn_manywords-10000_v0 | myst72 | "2025-03-05T07:03:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-05T06:58:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Orbina/Orbita-v0.1 | Orbina | "2024-04-29T11:39:13Z" | 2,806 | 39 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"tr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-16T22:17:10Z" | ---
license: apache-2.0
language:
- tr
model-index:
- name: Orbita-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge TR
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 41.97
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag TR
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 48
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU TR
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.51
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA TR
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: acc
name: accuracy
value: 50.78
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande TR
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.16
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k TR
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.41
name: accuracy
---
<img src="https://huggingface.co/Orbina/Orbita-v0.1/resolve/main/orbita.png"
alt="Orbita LLM" width="500"/>
# Orbita-v0.1
This model is a Turkish Large Language Model with multiple abilities across different dimensions of the Turkish language, covering a variety of tasks such as coding and math.
This model is an extended version of a Qwen-based Large Language Model (LLM) for Turkish. It was fully fine-tuned on 8x H100 GPUs for 2 days using a cleaned Turkish dataset, carefully annotated so that the model carries out Turkish instructions accurately and in an organized manner.
## Model Details
- **Base Model**: Qwen 14B based LLM
- **Training Dataset**: Annotated Turkish Dataset
- **Training Method**: Full Finetuning
## Usage Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Orbina/Orbita-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Orbina/Orbita-v0.1")
prompt = "Python'da ekrana 'Merhaba Dünya' nasıl yazılır?"
messages = [
{"role": "system", "content": "Sen, Orbina AI tarafından üretilen ve verilen talimatları takip ederek en iyi cevabı üretmeye çalışan yardımcı bir yapay zekasın."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
temperature=0.3,
top_k=50,
top_p=0.9,
max_new_tokens=512,
repetition_penalty=1,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Example Generations
```
Prompt:
Türkiye'deki finansal kurumlar, yapay zeka ile birlikte gelen veri güvenliği risklerini nasıl yönetiyor, 3 örnek verir misin?
Cevap:
Türkiye'deki finansal kurumlar, yapay zeka (AI) teknolojilerinin getirdiği veri güvenliği riskleri çeşitli yöntemlerle ele almakta ve bu riskleri azaltmaya çalışmaktadır. İşte üç örnek:
1. **Veri Şifreleme:** Finansal kuruluşlar, müşteri bilgileri gibi hassas verileri şifrelemeyi kullanarak veri ihlallerine karşı koruma sağlar. AES (Advanced Encryption Standard), RSA (Rivest-Shamir-Adleman) veya ECC (Elliptik Eğri Kriptografisi) gibi güçlü şifreleme algoritmaları, verilerin okunmasını engelleyebilir.
2. **Anomali Tespit Sistemleri:** Yapay zeka destekli anomali tespit sistemleri, beklenmedik davranışları algılayabilir ve güvenlik ekiplerini uyarır. Bu sistemler, makine öğrenimi modelleri sayesinde zaman içinde daha etkili hale gelebilir ve dolandırıcılık faaliyetlerini önlemede önemli rol oynar.
3. **Sürekli İzleme ve Güvenlik Politikası Uygulama:** Türkiye'deki finansal kurumlar, sürekli izleme ve güvenlik politikalarını uygularak veri güvenliğini sürdürür. Bu süreçlerde düzenli güvenlik denetimleri yapılır ve tüm personelin güvenlik konusunda eğitildiği göz önünde bulundurulur. Ayrıca, müşterilere de güvenlik protokollerini anlatan eğitimler sunulur.
Bu örnekler, Türkiye'deki finansal kurumların yapay zeka ile ilgili veri güvenliği risklerini yönetmek için kullandıkları bazı stratejileri göstermektedir. Ancak herhangi bir güvenlik sistemi mükemmel değildir ve bu nedenle kurumlar da sürekli olarak güvenlik durumlarını değerlendirip iyileştirmeler yapmaktadırlar.
```
```
Prompt:
Python'da ekrana "Merhaba Dünya" nasıl yazılır?
Cevap:
Python'da ekrana "Merhaba Dünya" yazmak için `print` fonksiyonunu kullanabilirsiniz. İşte basit bir örnek:
``python
print("Merhaba Dünya")
``
Bu kodu çalıştırdığınızda, konsol ekranında "Merhaba Dünya" metnini göreceksiniz. `print` fonksiyonu, verilen argümanı (bu durumda "Merhaba Dünya") alır ve onu çıktı olarak verir.
```
# [Open LLM Turkish Leaderboard Evaluation Results](https://huggingface.co/spaces/malhajar/OpenLLMTurkishLeaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |49.47|
|AI2 Reasoning Challenge_tr |41.97|
|HellaSwag_tr |48.00|
|MMLU_tr |49.51|
|TruthfulQA_tr |50.78|
|Winogrande _tr |56.16|
|GSM8k_tr |50.41|
|
DongYa0/Mixtral-8x7B-Evalution-v0.1 | DongYa0 | "2024-03-23T10:03:31Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-02-18T04:16:53Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: Mixtral-8x7B-Evalution-v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-8x7B-Evalution-v0.1
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
asas-ai/acegpt_7B_8bit_qlora_xlsum | asas-ai | "2023-11-09T21:23:19Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/AceGPT-7B_8bit",
"base_model:finetune:asas-ai/AceGPT-7B_8bit",
"region:us"
] | null | "2023-11-09T21:22:43Z" | ---
base_model: asas-ai/AceGPT-7B_8bit
tags:
- generated_from_trainer
model-index:
- name: acegpt_7B_8bit_qlora_xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# acegpt_7B_8bit_qlora_xlsum
This model is a fine-tuned version of [asas-ai/AceGPT-7B_8bit](https://huggingface.co/asas-ai/AceGPT-7B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1950
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
fatin098/ariyanxd | fatin098 | "2024-01-08T21:37:27Z" | 21 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:emilianJR/epiCRealism",
"base_model:adapter:emilianJR/epiCRealism",
"license:mit",
"region:us"
] | text-to-image | "2024-01-08T21:37:05Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ariyan, blue and orange light combination with balck dress
parameters:
negative_prompt: secret *_*
output:
url: images/ariyan_10_1648734856_2806113818906474538_2108416020.jpg
base_model: emilianJR/epiCRealism
instance_prompt: null
license: mit
---
# Ariyan Chy LoRA
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/fatin098/ariyanxd/tree/main) them in the Files & versions tab.
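A minimal inference sketch, assuming the weights load as a standard diffusers LoRA on the base model listed above (the prompt mirrors the widget example; scheduler and other settings are left at defaults):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "emilianJR/epiCRealism", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA weights from this repository on top of the base pipeline
pipe.load_lora_weights("fatin098/ariyanxd")

image = pipe("ariyan, blue and orange light combination with black dress").images[0]
image.save("ariyan.png")
```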
|
lesso02/daa41b95-2080-40ea-920c-08a6d9122b32 | lesso02 | "2025-03-25T04:17:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | "2025-03-25T04:10:30Z" | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: daa41b95-2080-40ea-920c-08a6d9122b32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 72479e11a2a7cad2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/72479e11a2a7cad2_train_data.json
type:
field_input: prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso02/daa41b95-2080-40ea-920c-08a6d9122b32
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/72479e11a2a7cad2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 20
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6832998f-481b-439e-9deb-a5b33a255e16
wandb_project: 02a
wandb_run: your_name
wandb_runid: 6832998f-481b-439e-9deb-a5b33a255e16
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# daa41b95-2080-40ea-920c-08a6d9122b32
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 5.2546 |
| 43.8207 | 0.3378 | 500 | 5.3151 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF | NikolayKozloff | "2024-06-01T19:13:26Z" | 5 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:OwenArli/ArliAI-Llama-3-8B-Cumulus-v1.0",
"base_model:quantized:OwenArli/ArliAI-Llama-3-8B-Cumulus-v1.0",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-01T19:13:10Z" | ---
license: llama3
tags:
- llama-cpp
- gguf-my-repo
base_model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
---
# NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF
This model was converted to GGUF format from [`AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0`](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF --hf-file awanllm-llama-3-8b-cumulus-v1.0-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF --hf-file awanllm-llama-3-8b-cumulus-v1.0-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF --hf-file awanllm-llama-3-8b-cumulus-v1.0-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo NikolayKozloff/Awanllm-Llama-3-8B-Cumulus-v1.0-Q4_0-GGUF --hf-file awanllm-llama-3-8b-cumulus-v1.0-q4_0.gguf -c 2048
```
|
Ramprashant/Llama-2-7b-avro | Ramprashant | "2024-06-08T13:05:16Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T12:58:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/marco-o1-uncensored-GGUF | mradermacher | "2024-11-26T15:58:25Z" | 407 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:thirdeyeai/marco-o1-uncensored",
"base_model:quantized:thirdeyeai/marco-o1-uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-26T13:08:29Z" | ---
base_model: thirdeyeai/marco-o1-uncensored
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/thirdeyeai/marco-o1-uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
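As a quick start, any single-file quant from the table below can be run directly with llama.cpp — a sketch using the `llama-cli` binary (the Q4_K_M file name comes from the table; the prompt is illustrative):
```bash
llama-cli -m marco-o1-uncensored.Q4_K_M.gguf -p "Explain quantization in one sentence." -n 128
```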
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/bge_large_medical-GGUF | mradermacher | "2025-02-26T00:59:56Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2025-02-26T00:58:16Z" | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ls-da3m0ns/bge_large_medical
|
Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF | Lilazn | "2025-02-05T06:19:31Z" | 141 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-05T06:18:47Z" | ---
license: mit
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---
# Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lilazn/DeepSeek-R1-Distill-Qwen-14B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-q4_k_m.gguf -c 2048
```
|
kujirahand/whisper-small-ja | kujirahand | "2023-02-21T15:31:40Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-02-19T05:13:35Z" | ---
language:
- ja
license: apache-2.0
tags:
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Ja - kujirahand
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ja
split: test
args: 'config: ja, split: test'
metrics:
- name: Wer
type: wer
value: 112.99945265462507
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ja - kujirahand
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6825
- Wer: 112.9995
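For inference, the checkpoint can be used with the 🤗 Transformers ASR pipeline — a minimal sketch (the audio file path is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kujirahand/whisper-small-ja")
print(asr("sample_ja.wav")["text"])
```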
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.25 | 10 | 1.7314 | 23.6341 |
| No log | 0.5 | 20 | 0.7887 | 112.5439 |
| 1.6472 | 0.75 | 30 | 0.7152 | 110.8503 |
| 1.6472 | 1.0 | 40 | 0.6825 | 112.9995 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2 |
baudm/vitstr-small-patch16-224 | baudm | "2022-08-28T18:53:19Z" | 0 | 0 | null | [
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] | image-to-text | "2022-08-28T18:52:01Z" | ---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ViTSTR small v1.0
ViTSTR model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 224x224 with a patch size of 16x16.
Disclaimer: this model card was not written by the original author.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{atienza2021vision,
title={Vision transformer for fast and efficient scene text recognition},
author={Atienza, Rowel},
booktitle={International Conference on Document Analysis and Recognition},
pages={319--334},
year={2021},
organization={Springer}
}
```
|
Poulblack/Poulblack | Poulblack | "2025-03-22T06:17:06Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-22T05:27:54Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Fhermin/ppo-SnowballTarget2 | Fhermin | "2024-02-27T04:18:38Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-02-27T04:18:33Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fhermin/ppo-SnowballTarget2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Gaja-v2.00-GGUF | mradermacher | "2024-12-29T22:16:45Z" | 45 | 0 | transformers | [
"transformers",
"gguf",
"hindi",
"english ",
"Bilingual",
"en",
"hi",
"dataset:sarvamai/samvaad-hi-v1",
"base_model:damerajee/Gaja-v2.00",
"base_model:quantized:damerajee/Gaja-v2.00",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-29T21:53:46Z" | ---
base_model: damerajee/Gaja-v2.00
datasets:
- sarvamai/samvaad-hi-v1
language:
- en
- hi
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- hindi
- 'english '
- Bilingual
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/damerajee/Gaja-v2.00
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gaja-v2.00-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gaja-v2.00-GGUF/resolve/main/Gaja-v2.00.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Helsinki-NLP/opus-mt-tc-big-itc-he | Helsinki-NLP | "2023-10-10T11:20:10Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ca",
"es",
"fr",
"gl",
"he",
"it",
"pt",
"ro",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-08-12T12:28:06Z" | ---
language:
- ca
- es
- fr
- gl
- he
- it
- pt
- ro
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-itc-he
results:
- task:
name: Translation cat-heb
type: translation
args: cat-heb
dataset:
name: flores101-devtest
type: flores_101
args: cat heb devtest
metrics:
- name: BLEU
type: bleu
value: 23.0
- name: chr-F
type: chrf
value: 0.52457
- task:
name: Translation fra-heb
type: translation
args: fra-heb
dataset:
name: flores101-devtest
type: flores_101
args: fra heb devtest
metrics:
- name: BLEU
type: bleu
value: 23.2
- name: chr-F
type: chrf
value: 0.52953
- task:
name: Translation glg-heb
type: translation
args: glg-heb
dataset:
name: flores101-devtest
type: flores_101
args: glg heb devtest
metrics:
- name: BLEU
type: bleu
value: 20.8
- name: chr-F
type: chrf
value: 0.50918
- task:
name: Translation ita-heb
type: translation
args: ita-heb
dataset:
name: flores101-devtest
type: flores_101
args: ita heb devtest
metrics:
- name: BLEU
type: bleu
value: 18.3
- name: chr-F
type: chrf
value: 0.49007
- task:
name: Translation por-heb
type: translation
args: por-heb
dataset:
name: flores101-devtest
type: flores_101
args: por heb devtest
metrics:
- name: BLEU
type: bleu
value: 24.4
- name: chr-F
type: chrf
value: 0.53906
- task:
name: Translation ron-heb
type: translation
args: ron-heb
dataset:
name: flores101-devtest
type: flores_101
args: ron heb devtest
metrics:
- name: BLEU
type: bleu
value: 22.1
- name: chr-F
type: chrf
value: 0.52103
- task:
name: Translation spa-heb
type: translation
args: spa-heb
dataset:
name: flores101-devtest
type: flores_101
args: spa heb devtest
metrics:
- name: BLEU
type: bleu
value: 16.5
- name: chr-F
type: chrf
value: 0.47646
- task:
name: Translation fra-heb
type: translation
args: fra-heb
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fra-heb
metrics:
- name: BLEU
type: bleu
value: 39.6
- name: chr-F
type: chrf
value: 0.60539
- task:
name: Translation ita-heb
type: translation
args: ita-heb
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ita-heb
metrics:
- name: BLEU
type: bleu
value: 40.0
- name: chr-F
type: chrf
value: 0.60264
- task:
name: Translation por-heb
type: translation
args: por-heb
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: por-heb
metrics:
- name: BLEU
type: bleu
value: 44.4
- name: chr-F
type: chrf
value: 0.63087
- task:
name: Translation spa-heb
type: translation
args: spa-heb
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: spa-heb
metrics:
- name: BLEU
type: bleu
value: 44.5
- name: chr-F
type: chrf
value: 0.63883
---
# opus-mt-tc-big-itc-he
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Italic languages (itc) to Hebrew (he).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-08-03
- **License:** CC-BY-4.0
- **Language(s):**
- Source Language(s): cat fra glg ita lad_Latn por ron spa
- Target Language(s): heb
- Language Pair(s): cat-heb fra-heb glg-heb ita-heb por-heb ron-heb spa-heb
- Valid Target Language Labels:
- **Original Model**: [opusTCv20210807_transformer-big_2022-08-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.zip)
- **Resources for more information:**
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- More information about released models for this language pair: [OPUS-MT itc-heb README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-heb/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"La María és feminista.",
"Contribuyan en Tatoeba."
]
model_name = "pytorch-models/opus-mt-tc-big-itc-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# מרי היא פמיניסטית.
# תרום לטאטואבה.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-he")
print(pipe("La María és feminista."))
# expected output: מרי היא פמיניסטית.
```
## Training
- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-08-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* test set translations: [opusTCv20210807_transformer-big_2022-08-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-08-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-heb/opusTCv20210807_transformer-big_2022-08-03.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fra-heb | tatoeba-test-v2021-08-07 | 0.60539 | 39.6 | 3281 | 20655 |
| ita-heb | tatoeba-test-v2021-08-07 | 0.60264 | 40.0 | 1706 | 9796 |
| por-heb | tatoeba-test-v2021-08-07 | 0.63087 | 44.4 | 719 | 4423 |
| spa-heb | tatoeba-test-v2021-08-07 | 0.63883 | 44.5 | 1849 | 12112 |
| cat-heb | flores101-devtest | 0.52457 | 23.0 | 1012 | 20749 |
| fra-heb | flores101-devtest | 0.52953 | 23.2 | 1012 | 20749 |
| glg-heb | flores101-devtest | 0.50918 | 20.8 | 1012 | 20749 |
| ita-heb | flores101-devtest | 0.49007 | 18.3 | 1012 | 20749 |
| por-heb | flores101-devtest | 0.53906 | 24.4 | 1012 | 20749 |
| ron-heb | flores101-devtest | 0.52103 | 22.1 | 1012 | 20749 |
| spa-heb | flores101-devtest | 0.47646 | 16.5 | 1012 | 20749 |
## Citation Information
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Sat Aug 13 00:02:03 EEST 2022
* port machine: LM0-400-22516.local
|
RamonAnkersmit/q-Taxi-v3 | RamonAnkersmit | "2022-12-12T21:33:48Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-12T20:49:47Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the download helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="RamonAnkersmit/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
mradermacher/tinyllama-fake-news-detector-v1-GGUF | mradermacher | "2024-12-23T18:29:23Z" | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:takano32/tinyllama-fake-news-detector-v1",
"base_model:quantized:takano32/tinyllama-fake-news-detector-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-23T18:23:05Z" | ---
base_model: takano32/tinyllama-fake-news-detector-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/takano32/tinyllama-fake-news-detector-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-fake-news-detector-v1-GGUF/resolve/main/tinyllama-fake-news-detector-v1.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf | RichardErkhov | "2025-02-14T03:41:05Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-14T02:58:24Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint - GGUF
- Model creator: https://huggingface.co/genloop/
- Original model: https://huggingface.co/genloop/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q2_K.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q2_K.gguf) | Q2_K | 0.54GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_S.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_M.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K.gguf) | Q3_K | 0.64GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_0.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_0.gguf) | Q4_0 | 0.72GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K.gguf) | Q4_K | 0.75GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_1.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_1.gguf) | Q4_1 | 0.77GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_0.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_0.gguf) | Q5_0 | 0.83GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K.gguf) | Q5_K | 0.85GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_1.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q5_1.gguf) | Q5_1 | 0.89GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q6_K.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q6_K.gguf) | Q6_K | 0.95GB |
| [fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q8_0.gguf](https://huggingface.co/RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint-gguf/blob/main/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q8_0.gguf) | Q8_0 | 1.23GB |
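Any single file above can be fetched without cloning the whole repository — a sketch with the Hugging Face CLI (pick whichever quant fits your memory budget; Q4_K_M is shown as an example):
```bash
huggingface-cli download RichardErkhov/genloop_-_fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint \
  fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint.Q4_K_M.gguf --local-dir .
```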
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cascopia/ArielNoah | Cascopia | "2025-03-28T12:50:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-28T12:49:45Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
rford2471/TwinLlama-3.1-8B | rford2471 | "2025-02-11T19:36:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-11T19:30:11Z" | ---
base_model: meta-llama/Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** rford2471
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LoneStriker/TinyLlama-1.1B-32k-Instruct-4.0bpw-h6-exl2 | LoneStriker | "2024-01-30T21:36:52Z" | 6 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/airoboros-3.2",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-30T21:32:50Z" | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
datasets:
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# TinyLlama-1.1B-32k-Instruct
This is [TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k) instruct-tuned on several open-source instruct datasets, intended primarily for use in speculative decoding.
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
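As a minimal sketch (the helper function and all names are illustrative, not part of the model's release), the format above could be assembled programmatically:
```python
def build_prompt(system_prompt: str, turns: list[tuple[str, str]]) -> str:
    """Assemble the modified multi-turn Alpaca format described above.

    Each turn is (user_message, model_response); leave the final
    response empty ("") so the model continues from "### Response:".
    """
    parts = [f"### Instruction:\n{system_prompt}"]
    for user_message, model_response in turns:
        parts.append(f"### Input:\n{user_message}")
        parts.append(f"### Response:\n{model_response}")
    return "\n\n".join(parts)

prompt = build_prompt("You are a helpful assistant.", [("Hello!", "")])
```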
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours. |
thliang01/ppo-LunarLander-v2 | thliang01 | "2022-12-05T13:27:18Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-05T13:26:46Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 192.10 +/- 40.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is assumed from the Deep RL Course convention; adjust it if the repo stores the file under a different name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="thliang01/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
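The loaded policy can then be evaluated the usual way (a sketch; the env id matches this card's metadata):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```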
|
nhung03/e521c6dc-afae-4879-9541-0ebf24de22c2 | nhung03 | "2025-02-09T15:23:40Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | "2025-02-09T15:19:19Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e521c6dc-afae-4879-9541-0ebf24de22c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d053733458a8c126_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d053733458a8c126_train_data.json
type:
field_input: SeqB
field_instruction: A
field_output: SeqA
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/e521c6dc-afae-4879-9541-0ebf24de22c2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d053733458a8c126_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9af9d5b1-802b-417b-a789-fa7b6715c306
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9af9d5b1-802b-417b-a789-fa7b6715c306
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e521c6dc-afae-4879-9541-0ebf24de22c2
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3058
## Model description
More information needed
## Intended uses & limitations
More information needed
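In the absence of author guidance, one plausible way to use this LoRA adapter is to load it on top of its base model with PEFT (a sketch under standard PEFT assumptions, not an official recipe):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-7b-hf-flash"
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "nhung03/e521c6dc-afae-4879-9541-0ebf24de22c2")
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```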
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 18.2649 | 0.0035 | 10 | 4.3058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
joaomiranda27/Tiny-llama-pt-text-sql | joaomiranda27 | "2024-10-30T12:31:00Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-10-30T12:30:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
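Until the authors fill this in, a generic Transformers loading sketch may serve (the model id comes from this repo; the prompt and everything else is standard boilerplate, not author-provided):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "joaomiranda27/Tiny-llama-pt-text-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt below is illustrative only.
inputs = tokenizer("-- Translate to SQL:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```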
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Capri-GGUF | mradermacher | "2024-12-16T02:05:32Z" | 16 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sinch1305/Llama3-senti",
"base_model:quantized:Sinch1305/Llama3-senti",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T23:02:27Z" | ---
base_model: Sinch1305/Llama3-senti
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sinch1305/Llama3-senti
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
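For example, with the `llama-cpp-python` bindings (a sketch; the package choice and the local quant filename are the reader's assumptions, not part of this release):
```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the recommended Q4_K_M file.
llm = Llama(model_path="Capri.Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello", max_tokens=128)
print(out["choices"][0]["text"])
```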
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Capri-GGUF/resolve/main/Capri.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|