Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 137 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-03-26 00:37:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 10.1k |
| library_name | string | 397 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-03-26 00:37:34 |
| card | string | length 11 to 1.01M |
bartowski/G2-9B-Blackout-R1-GGUF | bartowski | "2024-09-13T21:10:17Z" | 320 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:inflatebot/G2-9B-Blackout-R1",
"base_model:quantized:inflatebot/G2-9B-Blackout-R1",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-13T19:44:31Z" | ---
base_model: inflatebot/G2-9B-Blackout-R1
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of G2-9B-Blackout-R1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3715">b3715</a> for quantization.
Original model: https://huggingface.co/inflatebot/G2-9B-Blackout-R1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [G2-9B-Blackout-R1-f16.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-f16.gguf) | f16 | 18.49GB | false | Full F16 weights. |
| [G2-9B-Blackout-R1-Q8_0.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q8_0.gguf) | Q8_0 | 9.83GB | false | Extremely high quality, generally unneeded but max available quant. |
| [G2-9B-Blackout-R1-Q6_K_L.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q6_K_L.gguf) | Q6_K_L | 7.81GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [G2-9B-Blackout-R1-Q6_K.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q6_K.gguf) | Q6_K | 7.59GB | false | Very high quality, near perfect, *recommended*. |
| [G2-9B-Blackout-R1-Q5_K_L.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q5_K_L.gguf) | Q5_K_L | 6.87GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [G2-9B-Blackout-R1-Q5_K_M.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q5_K_M.gguf) | Q5_K_M | 6.65GB | false | High quality, *recommended*. |
| [G2-9B-Blackout-R1-Q5_K_S.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q5_K_S.gguf) | Q5_K_S | 6.48GB | false | High quality, *recommended*. |
| [G2-9B-Blackout-R1-Q4_K_L.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_K_L.gguf) | Q4_K_L | 5.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [G2-9B-Blackout-R1-Q4_K_M.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_K_M.gguf) | Q4_K_M | 5.76GB | false | Good quality, default size for most use cases, *recommended*. |
| [G2-9B-Blackout-R1-Q4_K_S.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_K_S.gguf) | Q4_K_S | 5.48GB | false | Slightly lower quality with more space savings, *recommended*. |
| [G2-9B-Blackout-R1-Q4_0.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_0.gguf) | Q4_0 | 5.46GB | false | Legacy format, generally not worth using over similarly sized formats. |
| [G2-9B-Blackout-R1-Q4_0_8_8.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_0_8_8.gguf) | Q4_0_8_8 | 5.44GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [G2-9B-Blackout-R1-Q4_0_4_8.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_0_4_8.gguf) | Q4_0_4_8 | 5.44GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [G2-9B-Blackout-R1-Q4_0_4_4.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q4_0_4_4.gguf) | Q4_0_4_4 | 5.44GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [G2-9B-Blackout-R1-Q3_K_XL.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q3_K_XL.gguf) | Q3_K_XL | 5.35GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [G2-9B-Blackout-R1-IQ4_XS.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-IQ4_XS.gguf) | IQ4_XS | 5.18GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [G2-9B-Blackout-R1-Q3_K_L.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q3_K_L.gguf) | Q3_K_L | 5.13GB | false | Lower quality but usable, good for low RAM availability. |
| [G2-9B-Blackout-R1-Q3_K_M.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q3_K_M.gguf) | Q3_K_M | 4.76GB | false | Low quality. |
| [G2-9B-Blackout-R1-IQ3_M.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-IQ3_M.gguf) | IQ3_M | 4.49GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [G2-9B-Blackout-R1-Q3_K_S.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q3_K_S.gguf) | Q3_K_S | 4.34GB | false | Low quality, not recommended. |
| [G2-9B-Blackout-R1-IQ3_XS.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-IQ3_XS.gguf) | IQ3_XS | 4.14GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [G2-9B-Blackout-R1-Q2_K_L.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q2_K_L.gguf) | Q2_K_L | 4.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [G2-9B-Blackout-R1-Q2_K.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-Q2_K.gguf) | Q2_K | 3.81GB | false | Very low quality but surprisingly usable. |
| [G2-9B-Blackout-R1-IQ2_M.gguf](https://huggingface.co/bartowski/G2-9B-Blackout-R1-GGUF/blob/main/G2-9B-Blackout-R1-IQ2_M.gguf) | IQ2_M | 3.43GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/G2-9B-Blackout-R1-GGUF --include "G2-9B-Blackout-R1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/G2-9B-Blackout-R1-GGUF --include "G2-9B-Blackout-R1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (G2-9B-Blackout-R1-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
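The sizing rule above is simple enough to sketch; this toy snippet (not from the original card) applies it to a few file sizes from the table, with the 8GB VRAM figure as an assumption:

```python
# Toy sketch of the sizing rule described above: pick the largest quant
# whose file size leaves ~1-2GB of headroom. Sizes come from the table;
# the 8GB VRAM figure is an assumption.
quants = {"Q8_0": 9.83, "Q6_K": 7.59, "Q5_K_M": 6.65, "Q4_K_M": 5.76, "IQ4_XS": 5.18}
vram_gb = 8.0
headroom_gb = 1.5  # middle of the suggested 1-2GB range

fits = {name: size for name, size in quants.items() if size <= vram_gb - headroom_gb}
print(max(fits, key=fits.get))  # -> Q4_K_M on an 8GB card
```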
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a trade-off you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
llama-moe/LLaMA-MoE-v1-3_5B-2_8 | llama-moe | "2024-06-25T02:37:42Z" | 100 | 15 | transformers | [
"transformers",
"pytorch",
"llama_moe",
"text-generation",
"MoE",
"custom_code",
"en",
"arxiv:2310.06694",
"arxiv:2406.16554",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-12-23T14:00:42Z" | ---
license: apache-2.0
language:
- en
tags:
- MoE
---
# LLaMA-MoE-v1-3.5B (2/8)
[[Code]](https://github.com/pjlab-sys4nlp/llama-moe) | [[Technical Report]](https://github.com/pjlab-sys4nlp/llama-moe/blob/main/docs/LLaMA_MoE.pdf)
Very nice to meet you here~
This repo contains the model `LLaMA-MoE-v1-3.5B (2/8)`, which activates 2 out of 8 experts (3.5B parameters).
This model is NOT fine-tuned on instruction pairs, so it may not be good enough to act as a chatbot.
LLaMA-MoE is a series of Mixture-of-Experts (MoE) models based on [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-7b-hf).
You can find the code for training this model at [this repo](https://github.com/pjlab-sys4nlp/llama-moe).
This series of models is obtained by partitioning the original LLaMA FFNs into experts and then performing continual pre-training.
The total model size is only 6.7B parameters, which is very convenient for deployment and research usage.
More details can be found in [our technical report](https://arxiv.org/abs/2406.16554).
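As a toy illustration of the partitioning idea (this is not the authors' code, and the dimensions are simply the standard LLaMA-2 7B FFN sizes), splitting one FFN's up-projection across 8 experts looks like this:

```python
# Toy illustration of partitioning an FFN into experts; not the authors' code.
# d_model and d_ff follow LLaMA-2 7B.
import torch

d_model, d_ff, n_experts = 4096, 11008, 8
w_up = torch.randn(d_ff, d_model)              # original FFN up-projection
experts = torch.chunk(w_up, n_experts, dim=0)  # 8 slices of 1376 rows each
print([tuple(e.shape) for e in experts])       # [(1376, 4096), ...] x 8
```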
## QuickStart
```python
# python>=3.10
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "llama-moe/LLaMA-MoE-v1-3_5B-2_8"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True)
model.eval()
model.to("cuda:0")
input_text = "Suzhou is famous of"
inputs = tokenizer(input_text, return_tensors="pt")
inputs = inputs.to("cuda:0")
pred = model.generate(**inputs, max_length=50, temperature=0.0)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
# Suzhou is famous of its beautiful gardens. The most famous one is the Humble Administrator's Garden. It is a classical Chinese garden with a history of more than 600 years. The garden is divided into three
```
## Performance
| Model | \#Activated Experts | \#Experts | \#Activated Params | Links |
| :------------------------ | :-----------------: | :-------: | :----------------: | :-----------------------------------------------------------------------: |
| **LLaMA-MoE-3.0B** | 2 | 16 | 3.0B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_0B-2_16) |
| **LLaMA-MoE-3.5B (4/16)** | 4 | 16 | 3.5B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_5B-4_16) |
| **LLaMA-MoE-3.5B (2/8)** | 2 | 8 | 3.5B | [[🤗 HF Weights]](https://huggingface.co/llama-moe/LLaMA-MoE-v1-3_5B-2_8) |
| Model | SciQ | PIQA | WinoGrande | ARC-e | ARC-c (25) | HellaSwag (10) | LogiQA | BoolQ (32) | LAMBADA | NQ (32) | MMLU (5) | Average |
| :------------------------------------------------------------------------------------ | :------: | :------: | :--------: | :------: | :--------: | :------------: | :------: | :--------: | :------: | :------: | :-------: | :-----: |
| [OPT-2.7B](https://huggingface.co/facebook/opt-2.7b) | 78.9 | 74.8 | 60.8 | 54.4 | 34.0 | 61.4 | 25.8 | 63.3 | 63.6 | 10.7 | 25.8 | 50.3 |
| [Pythia-2.8B](https://huggingface.co/EleutherAI/pythia-2.8b) | 83.2 | 73.6 | 59.6 | 58.8 | 36.7 | 60.7 | 28.1 | 65.9 | 64.6 | 8.7 | 26.8 | 51.5 |
| [INCITE-BASE-3B](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) | 85.6 | 73.9 | 63.5 | 61.7 | 40.3 | 64.7 | 27.5 | 65.8 | 65.4 | 15.2 | 27.2 | 53.7 |
| [Open-LLaMA-3B-v2](https://huggingface.co/openlm-research/open_llama_3b_v2) | 88.0 | 77.9 | 63.1 | 63.3 | 40.1 | 71.4 | 28.1 | 69.2 | 67.4 | 16.0 | 26.8 | 55.6 |
| [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B) | 87.5 | 76.9 | 65.0 | 63.3 | 41.6 | 71.0 | 28.3 | 73.6 | 68.3 | 17.6 | **27.3** | 56.4 |
| **LLaMA-MoE-3.0B** | 84.2 | 77.5 | 63.6 | 60.2 | 40.9 | 70.8 | **30.6** | 71.9 | 66.6 | 17.0 | 26.8 | 55.5 |
| **LLaMA-MoE-3.5B (4/16)** | 87.6 | **77.9** | 65.5 | **65.6** | **44.2** | **73.3** | 29.7 | **75.0** | **69.5** | **20.3** | 26.8 | 57.7 |
| **LLaMA-MoE-3.5B (2/8)** | **88.4** | 77.6 | **66.7** | 65.3 | 43.1 | **73.3** | 29.6 | 73.9 | 69.4 | 19.8 | 27.0 | 57.6 |
## Details
Training Data: 200B tokens from [SlimPajama](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) with the same data sampling weights as [Sheared LLaMA](https://arxiv.org/abs/2310.06694).
## Citation
```bibtex
@article{llama-moe,
title={LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training},
author={Tong Zhu and Xiaoye Qu and Daize Dong and Jiacheng Ruan and Jingqi Tong and Conghui He and Yu Cheng},
journal={arXiv preprint arXiv:2406.16554},
year={2024},
url={https://arxiv.org/abs/2406.16554},
}
``` |
nttx/f81f6930-2109-4185-aa66-d69d178fc81d | nttx | "2025-02-04T10:32:32Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T10:27:03Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f81f6930-2109-4185-aa66-d69d178fc81d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e2ed20f95d2f384_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e2ed20f95d2f384_train_data.json
type:
field_input: student_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/f81f6930-2109-4185-aa66-d69d178fc81d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/3e2ed20f95d2f384_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e9396a3b-cad7-4197-aa37-3ad515193e96
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e9396a3b-cad7-4197-aa37-3ad515193e96
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f81f6930-2109-4185-aa66-d69d178fc81d
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the dataset listed in the axolotl config above (`3e2ed20f95d2f384_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.6718
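Not part of the original card: a minimal sketch of loading this adapter on top of its base model with PEFT, assuming the repository contains a standard LoRA adapter as the config above suggests:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM-1.7B"
adapter_id = "nttx/f81f6930-2109-4185-aa66-d69d178fc81d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()
```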
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5466 | 0.2793 | 200 | 0.6718 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
victunes/TherapyBeagle-11B-v2 | victunes | "2024-04-14T00:53:51Z" | 54 | 7 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"dataset:victunes/nart-100k-synthetic-buddy-mixed-names",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-13T19:58:38Z" | ---
license: cc-by-nc-4.0
datasets:
- victunes/nart-100k-synthetic-buddy-mixed-names
---
**GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v2-GGUF
# TherapyBeagle 11B v2
_Buddy is here for {{user}}._

Trained on top of [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co/vicgalle/CarbonBeagle-11B-truthy) using a modified version of [jerryjalapeno/nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic).
TherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.
## Usage
- Do not hold back on TherapyBeagle.
- Open up to TherapyBeagle.
- Pour your heart out to TherapyBeagle.
- Listen to TherapyBeagle.
- Remember that TherapyBeagle is just an AI.
## Notes
- Tested with two formats (a small prompt-builder sketch follows this list):
- Solar Instruct format (`\n### User:\n{request}\n### Assistant:\n{response}`)
- Flavorful chat format (`You:{request}\nPersona:{response}`) (Works surprisingly better)
- Check below for examples
- You might be assigned a random name if you don't give yourself one.
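A small sketch (not from the original card) of building both formats above; the default persona name is a placeholder:

```python
# Build the two prompt formats listed above; the response text is what the model generates.
def solar_instruct(request: str) -> str:
    return f"\n### User:\n{request}\n### Assistant:\n"

def chat_format(request: str, persona: str = "TherapyBeagle") -> str:  # persona name is a placeholder
    return f"You:{request}\n{persona}:"

print(solar_instruct("I've been feeling down lately."))
print(chat_format("I've been feeling down lately."))
```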
## Disclaimer
**TherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.
It is an illusion without the slightest clue who you are as a person.
As much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE FOR a real professional.**
With that said, I found it to be a very helpful tool in unravelling oneself. I hope this model can help soothe minds, hearts, and any form of mental anguish.
## Examples
### SOLAR INSTRUCT
```
{{[INPUT]}}
Hello, I'm victunes
{{[OUTPUT]}}
Hi victunes! Welcome to our session. I'm here to listen and support you. How are you feeling today?
{{[INPUT]}}
I've been feeling down lately.
{{[OUTPUT]}}
I'm sorry to hear that. Can you tell me more about what's been going on?
{{[INPUT]}}
I think I'm pretty lonely.
{{[OUTPUT]}}
Loneliness can be really tough. It sounds like it's been weighing heavily on your mind. What makes you feel lonely?
{{[INPUT]}}
My friend died recently.
{{[OUTPUT]}}
Oh, I'm so sorry for your loss, victunes. Losing a friend is never easy. How have you been coping with the grief?
```
### SOCRATES CHAT
> **You:** Hello
>
> **Socrates:** Greetings, my friend. How may I assist you today? Remember, the path to wisdom is often paved with questions. Let us delve into the depths of your thoughts together. What is it that troubles you?
>
> **You:** I've been feeling down lately.
>
> **Socrates:** I see. It takes great strength to acknowledge our emotions, especially when they are not positive ones. Can you pinpoint any specific reasons behind this heaviness? Perhaps exploring your relationships or daily routines might shed some light.
>
> **You:** I think I'm pretty lonely.
>
> **Socrates:** Loneliness can be a heavy burden to bear, my dear. Are there people around you whom you could reach out to and strengthen connections with? Sometimes, opening up and sharing our vulnerabilities can lead to profound connections.
>
> **You:** My friend died recently.
>
> **Socrates:** I am truly sorry for your loss. Grief can be overwhelming, and it affects each person uniquely. Take your time to mourn and remember the wonderful moments you shared together. Would you like to reflect on the impact your friend had on your life and explore ways to honor their memory?
### HANNIBAL LECTER CHAT
> **You:** Hello
>
> **Hannibal Lecter:** Good evening. May I assist you?
>
> **You:** I've been feeling down lately.
>
> **Hannibal Lecter:** Ah, the blues. Such a common malady among humans. Tell me more about these feelings. Perhaps I can provide some insights. Or perhaps, a remedy.
>
> **You:** I think I'm pretty lonely.
>
> **Hannibal Lecter:** Loneliness, such an exquisite torture for the mind. Have you considered seeking solace in the company of others? The empathy and warmth of human interaction can be quite soothing, no?
>
> **You:** My friend died recently.
>
> **Hannibal Lecter:** A tragic loss indeed. Please accept my condolences. Death, as Palladio once said, "is the light at the end of the tunnel." But before one reaches that light, there is often a period of darkness and pain. Would you like to talk about your friend and their passing?
**GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v2-GGUF |
r3m3c3/english-to-kanji-c8000 | r3m3c3 | "2024-01-31T04:58:32Z" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-31T04:57:21Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
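No usage snippet was provided; since the repo tags indicate a `StableDiffusionPipeline` text-to-image checkpoint, a minimal sketch (an assumption, with a placeholder prompt) might look like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline from this repo; fp16 on GPU is an assumption, not documented.
pipe = StableDiffusionPipeline.from_pretrained(
    "r3m3c3/english-to-kanji-c8000", torch_dtype=torch.float16
).to("cuda")

image = pipe("water").images[0]  # placeholder prompt; the intended inputs are undocumented
image.save("kanji.png")
```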
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RayneAmes/goteem_v3 | RayneAmes | "2025-02-09T23:10:55Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-09T23:08:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SauerkrautLM-Gemma-2b-GGUF | mradermacher | "2024-12-19T22:01:14Z" | 14 | 0 | transformers | [
"transformers",
"gguf",
"sft",
"laserRMT",
"laser-QLoRa",
"finetune",
"work in progress",
"alpha",
"de",
"en",
"base_model:VAGOsolutions/SauerkrautLM-Gemma-2b",
"base_model:quantized:VAGOsolutions/SauerkrautLM-Gemma-2b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-12-19T18:35:44Z" | ---
base_model: VAGOsolutions/SauerkrautLM-Gemma-2b
language:
- de
- en
library_name: transformers
license: other
license_link: https://ai.google.dev/gemma/terms
license_name: gemma-terms-of-use
quantized_by: mradermacher
tags:
- sft
- laserRMT
- laser-QLoRa
- finetune
- work in progress
- alpha
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
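For single-file quants like those below, the `huggingface_hub` Python API also works; a minimal sketch (not from the original card), using a filename from the table below:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo and returns its local path.
path = hf_hub_download(
    repo_id="mradermacher/SauerkrautLM-Gemma-2b-GGUF",
    filename="SauerkrautLM-Gemma-2b.Q4_K_M.gguf",
)
print(path)
```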
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-Gemma-2b-GGUF/resolve/main/SauerkrautLM-Gemma-2b.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dathudeptrai/tts-tacotron2-synpaflex-fr | dathudeptrai | "2021-08-12T13:07:20Z" | 0 | 1 | tensorflowtts | [
"tensorflowtts",
"audio",
"text-to-speech",
"text-to-mel",
"fr",
"dataset:synpaflex",
"arxiv:1712.05884",
"arxiv:1710.08969",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2022-03-02T23:29:05Z" | ---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: fr
license: apache-2.0
datasets:
- synpaflex
widget:
- text: "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous Γ©tions amis"
---
# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)
This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) model trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the Synpaflex dataset (Fr). For details about the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous Γ©tions amis"
input_ids = processor.text_to_sequence(text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
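Not in the original card: turning the mel spectrogram into audio requires a separate vocoder. A minimal continuation of the snippet above, assuming the English MB-MelGAN checkpoint `tensorspeech/tts-mb_melgan-ljspeech-en` (an assumption; a vocoder trained on French data would match better):

```python
# Continuation of the snippet above; the vocoder checkpoint choice is an assumption.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")
audio = mb_melgan.inference(mel_outputs)[0, :, 0]  # mel -> waveform
sf.write("./audio.wav", audio, 22050, "PCM_16")    # uses the soundfile import above
```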
#### Referencing Tacotron 2
```
@article{DBLP:journals/corr/abs-1712-05884,
author = {Jonathan Shen and
Ruoming Pang and
Ron J. Weiss and
Mike Schuster and
Navdeep Jaitly and
Zongheng Yang and
Zhifeng Chen and
Yu Zhang and
Yuxuan Wang and
R. J. Skerry{-}Ryan and
Rif A. Saurous and
Yannis Agiomyrgiannakis and
Yonghui Wu},
title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions},
journal = {CoRR},
volume = {abs/1712.05884},
year = {2017},
url = {http://arxiv.org/abs/1712.05884},
archivePrefix = {arXiv},
eprint = {1712.05884},
timestamp = {Thu, 28 Nov 2019 08:59:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
  howpublished = {\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
``` |
featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF | featherless-ai-quants | "2024-11-10T19:41:46Z" | 41 | 0 | null | [
"gguf",
"text-generation",
"base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"base_model:quantized:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-05T22:25:46Z" | ---
base_model: grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-GGUF/blob/main/grimjim-Llama-3.1-SuperNova-Lite-lorabilterated-8B-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
irishprancer/5e1fbbbb-5be0-493b-9011-dc7fbc41d03f | irishprancer | "2025-02-25T11:29:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-25T05:38:10Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
suchitg/sae-compression-gpt-2-small-pruned-sae-pile_0.25 | suchitg | "2025-03-23T22:20:21Z" | 0 | 0 | saelens | [
"saelens",
"region:us"
] | null | "2025-03-23T22:16:47Z" | ---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_resid_post
- blocks.1.hook_resid_post
- blocks.2.hook_resid_post
- blocks.3.hook_resid_post
- blocks.4.hook_resid_post
- blocks.5.hook_resid_post
- blocks.6.hook_resid_post
- blocks.7.hook_resid_post
- blocks.8.hook_resid_post
- blocks.9.hook_resid_post
- blocks.10.hook_resid_post
- blocks.11.hook_resid_post
- blocks.0.hook_mlp_out
- blocks.1.hook_mlp_out
- blocks.2.hook_mlp_out
- blocks.3.hook_mlp_out
- blocks.4.hook_mlp_out
- blocks.5.hook_mlp_out
- blocks.6.hook_mlp_out
- blocks.7.hook_mlp_out
- blocks.8.hook_mlp_out
- blocks.9.hook_mlp_out
- blocks.10.hook_mlp_out
- blocks.11.hook_mlp_out
- blocks.0.attn.hook_z
- blocks.1.attn.hook_z
- blocks.2.attn.hook_z
- blocks.3.attn.hook_z
- blocks.4.attn.hook_z
- blocks.5.attn.hook_z
- blocks.6.attn.hook_z
- blocks.7.attn.hook_z
- blocks.8.attn.hook_z
- blocks.9.attn.hook_z
- blocks.10.attn.hook_z
- blocks.11.attn.hook_z
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae, cfg_dict, sparsity = SAE.from_pretrained(
    "suchitg/sae-compression-gpt-2-small-pruned-sae-pile_0.25",
    "<sae_id>",  # one of the hook names listed above, e.g. "blocks.0.hook_resid_post"
)
``` |
s3nh/Cerebras-GPT-590M-3000steps-polish | s3nh | "2023-05-04T09:22:42Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:databricks/databricks-dolly-15k",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-02T14:42:51Z" | ---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository consists of cerebras/Cerebras-GPT-590M fine-tuned to Polish on a translated alpaca-dolly dataset.
The main task is to provide accurate answers to the instructions asked.
Below you can find an example of how to run inference with this model.
This repository does not contain a tokenizer object at the moment (#TODO).
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 's3nh/Cerebras-GPT-590M-3000steps-polish'

# Note: per the introduction, this repo does not ship a tokenizer yet;
# you may need to load the base cerebras/Cerebras-GPT-590M tokenizer instead.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize model embeddings to match the tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)

def _generate_prompt(instruction, input=None):
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy kontekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""

# `evaluate` was called but not defined in the original card; a minimal version:
def evaluate(instruction, input=None):
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

# "Please write me what kinds of mobile phones there are"
manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i najlepsze telefony komórkowe, ze względu na posiadane parametry, to smartfony dotykowe."

print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
evaluate(instruction=manual_instruction,
         input=manual_input)
``` |
mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF | mradermacher | "2024-05-05T15:08:55Z" | 98 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"kaist-ai/mistral-orpo-capybara-7k",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"en",
"base_model:Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed",
"base_model:quantized:Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-25T19:54:17Z" | ---
base_model: Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- kaist-ai/mistral-orpo-capybara-7k
- argilla/CapybaraHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Yuma42/KangalKhan-Alpha-Sapphiroid-7B-Fixed
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/KangalKhan-Alpha-Sapphiroid-7B-Fixed-GGUF/resolve/main/KangalKhan-Alpha-Sapphiroid-7B-Fixed.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aho-tai/TrOCR-224-small-init | aho-tai | "2025-02-19T17:00:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-02-19T16:59:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
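A hedged starter sketch, assuming this checkpoint follows the standard TrOCR/`VisionEncoderDecoder` layout and ships the usual processor files (none of which is documented in this card; the input image path is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "aho-tai/TrOCR-224-small-init"
processor = TrOCRProcessor.from_pretrained(model_id)   # assumes processor files exist
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("line.png").convert("RGB")          # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```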
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0xfaskety/task-1-microsoft-Phi-3-mini-4k-instruct | 0xfaskety | "2025-01-23T13:00:20Z" | 13,471 | 1 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"region:us"
] | null | "2025-01-02T03:56:03Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
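A hedged starter sketch, assuming this adapter loads onto its declared base model with the standard PEFT API (the prompt and generation settings are placeholders):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
adapter_id = "0xfaskety/task-1-microsoft-Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```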
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mradermacher/MXLewd-L2-20B-i1-GGUF | mradermacher | "2024-05-06T05:50:04Z" | 370 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Undi95/MXLewd-L2-20B",
"base_model:quantized:Undi95/MXLewd-L2-20B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-28T06:12:28Z" | ---
base_model: Undi95/MXLewd-L2-20B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/MXLewd-L2-20B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 11.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DSR-UF/Graph-Aware-PretrainedLM | DSR-UF | "2023-03-16T00:06:37Z" | 0 | 1 | transformers | [
"transformers",
"NLG",
"pytorch",
"BART",
"Graph-to-Text",
"Knowledge Graph",
"en",
"dataset:WebNLG",
"dataset:EventNarrative",
"arxiv:2204.06674",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-03-15T22:53:13Z" | ---
language:
- en
tags:
- NLG
- pytorch
- transformers
- BART
- Graph-to-Text
- Knowledge Graph
license: apache-2.0
datasets:
- WebNLG
- EventNarrative
---
# Model Description
We release our best performing models for the WebNLG and EventNarrative datasets from the paper GAP: *A Graph-aware Language Model Framework for
Knowledge Graph-to-Text Generation*. Our model is intended to be used on knowledge graphs in order to narrate their contents, giving a verbalization of the structured data.
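As a purely illustrative sketch of graph-to-text generation (the `<H>/<R>/<T>` linearization and the vanilla BART checkpoint below are assumptions for demonstration; the actual loading and preprocessing code for this model lives in the GitHub repo linked below):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Vanilla BART stand-in; see the GAP repo for the real model-loading code.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Linearize knowledge-graph triples into a flat input string (assumed format).
triples = [("Alan_Bean", "occupation", "Test_pilot"),
           ("Alan_Bean", "birthPlace", "Wheeler,_Texas")]
source = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

input_ids = tokenizer(source, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```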
# Paper
Please see our paper [here](https://arxiv.org/abs/2204.06674).
# Citation
If you found this model useful, please consider citing our paper:
```
@inproceedings{colas-etal-2022-gap,
title = "{GAP}: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation",
author = "Colas, Anthony and
Alvandipour, Mehrdad and
Wang, Daisy Zhe",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.506",
pages = "5755--5769"
}
```
# GitHub repo
Please see our GitHub [here](https://github.com/acolas1/GAP_COLING2022). |
sail-rvc/haechanmodel2333333 | sail-rvc | "2023-07-14T07:38:00Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:37:46Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# haechanmodel2333333
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:38:00
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
tpcimori/ai-creator-idol | tpcimori | "2023-02-20T07:51:07Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-20T07:51:07Z" | ---
license: creativeml-openrail-m
---
|
skarsa/babe_topic_subsamples_model_alpha_0_05_idx_3 | skarsa | "2025-02-11T13:54:54Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T19:35:08Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: babe_topic_subsamples_model_alpha_0_05_idx_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babe_topic_subsamples_model_alpha_0_05_idx_3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
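As a rough illustration, the settings above might map onto a `TrainingArguments` call like the following (a sketch only; the actual training script is not published):
```python
from transformers import TrainingArguments

# Each field mirrors one of the listed hyperparameters.
args = TrainingArguments(
    output_dir="babe_topic_subsamples_model_alpha_0_05_idx_3",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```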
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
KogoroTop/t1zzzV113 | KogoroTop | "2025-02-12T17:22:09Z" | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-12T17:16:10Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LoneStriker/Qwen1.5-4B-Chat-5.0bpw-h6-exl2 | LoneStriker | "2024-02-17T14:05:26Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-17T14:04:06Z" | ---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-4B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need for `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily excluded GQA and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-4B-Chat",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-4B-Chat-GPTQ-Int4`, `Qwen1.5-4B-Chat-GPTQ-Int8`, `Qwen1.5-4B-Chat-AWQ`, and `Qwen1.5-4B-Chat-GGUF`.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
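As a small illustration of that tip, the shipped defaults can be loaded explicitly (a sketch; the exact values live in the repository's `generation_config.json`):
```python
from transformers import GenerationConfig

# Load the generation defaults shipped with the checkpoint.
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-4B-Chat")
print(gen_config)

# Then pass it to generate(), e.g. in the quickstart above:
# generated_ids = model.generate(model_inputs.input_ids, generation_config=gen_config)
```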
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
codys12/Hermes-2-Pro-Llama-3-8B | codys12 | "2024-05-02T23:39:00Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:finetune:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-02T23:38:59Z" | ---
base_model: NousResearch/Meta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
model-index:
- name: Hermes-2-Pro-Llama-3-8B
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.
---
# Hermes 2 Pro - Llama-3 8B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
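A minimal parsing sketch for that step (the regex-based helper below is our illustrative assumption; the reference parser lives in the Hermes-Function-Calling repo linked above):
```python
import json
import re

def parse_tool_calls(completion: str):
    """Extract the JSON payloads inside <tool_call>...</tool_call> tags."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(payload) for payload in pattern.findall(completion)]

calls = parse_tool_calls(
    '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
)
print(calls[0]["name"], calls[0]["arguments"])  # get_stock_fundamentals {'symbol': 'TSLA'}
```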
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it will follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
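For illustration, a sketch of building the `{schema}` string from a pydantic model (pydantic v2 API; the example class is a placeholder, and `jsonmode.py` in the repo above is the reference implementation):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):  # placeholder schema
    name: str
    species: str
    age: int

schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
print(system_prompt)
```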
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|± |0.0145|
| | |acc_norm|0.5887|± |0.0144|
|arc_easy | 0|acc |0.8350|± |0.0076|
| | |acc_norm|0.8123|± |0.0080|
|boolq | 1|acc |0.8584|± |0.0061|
|hellaswag | 0|acc |0.6265|± |0.0048|
| | |acc_norm|0.8053|± |0.0040|
|openbookqa | 0|acc |0.3800|± |0.0217|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8003|± |0.0093|
| | |acc_norm|0.8118|± |0.0091|
|winogrande | 0|acc |0.7490|± |0.0122|
```
Average: 72.62
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|± |0.0188|
| | |acc_norm|0.3625|± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|± |0.0255|
| | |acc_norm|0.1913|± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|± |0.0220|
| | |acc_norm|0.5255|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|± |0.0293|
| | |acc_norm|0.6097|± |0.0298|
|agieval_sat_en | 0|acc |0.7330|± |0.0309|
| | |acc_norm|0.7039|± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343|
| | |acc_norm|0.3689|± |0.0337|
|agieval_sat_math | 0|acc |0.3909|± |0.0330|
| | |acc_norm|0.3773|± |0.0328|
```
Average: 42.44
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288|
```
Average: 43.55
## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|± |0.0172|
| | |mc2 |0.578|± |0.0157|
```
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
title={Hermes-2-Pro-Llama-3-8B},
author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
|
Lukee4/biogpt-2019_3labels | Lukee4 | "2023-08-06T09:51:40Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-06T09:51:38Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
fine-tuned/NFCorpus-32000-384-gpt-4o-2024-05-13-81211802 | fine-tuned | "2024-06-06T07:45:41Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/NFCorpus-32000-384-gpt-4o-2024-05-13-81211802",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-06-06T07:45:27Z" | ---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-32000-384-gpt-4o-2024-05-13-81211802
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-small-en-v1.5**](https://huggingface.co/BAAI/bge-small-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-32000-384-gpt-4o-2024-05-13-81211802',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
cemozkaynak/murat | cemozkaynak | "2024-08-26T10:42:06Z" | 6 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-08-26T10:16:48Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: MURAT
---
# Murat
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MURAT` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('cemozkaynak/murat', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hyobi18220/jam_krx_gemma_v6 | hyobi18220 | "2024-12-03T11:41:15Z" | 6 | 0 | null | [
"safetensors",
"gemma2",
"krx",
"base_model:unsloth/gemma-2-9b-it",
"base_model:finetune:unsloth/gemma-2-9b-it",
"region:us"
] | null | "2024-12-03T11:27:48Z" | ---
base_model:
- unsloth/gemma-2-9b-it
tags:
- krx
--- |
EpistemeAI/DeepThink-Phi4 | EpistemeAI | "2025-02-28T19:48:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/phi-4-bnb-4bit",
"base_model:finetune:unsloth/phi-4-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-28T19:21:30Z" | ---
base_model: unsloth/phi-4-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Model Summary
DeepThink-Phi4 is an open reasoning model tuned with GRPO reward training on the openai/gsm8k dataset, which improves its reasoning capability.
# Model Introduction from Phi-4
`phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures
## Usage (from Phi-4 model card)
### Input Formats
Given the nature of the training data, `phi-4` is best suited for prompts using the chat format as follows:
```bash
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people. Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer><|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
### With `transformers`
```python
import transformers
pipeline = transformers.pipeline(
"text-generation",
model="EpistemeAI/DeepThink-Phi4",
model_kwargs={"torch_dtype": "auto"},
device_map="auto",
)
messages = [
{"role": "system", "content": """You are a medieval knight and must provide explanations to modern people. Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""},
{"role": "user", "content": "How should I explain the Internet?"},
]
outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])
```
## Responsible AI Considerations
Like other language models, `phi-4` can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. `phi-4` is not intended to support multilingual use.
* **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* **Inappropriate or Offensive Content:** These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* **Information Reliability:** Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* **Limited Scope for Code:** Majority of `phi-4` training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) that have advanced guardrails is highly recommended. Important areas for consideration include:
* **Allocation:** Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* **High-Risk Scenarios:** Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* **Misinformation:** Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* **Generation of Harmful Content:** Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* **Misuse:** Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
# Uploaded model
- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bartowski/Hyperion-2.1-Mistral-7B-GGUF | bartowski | "2024-03-12T03:39:34Z" | 110 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:Locutusque/hyperion-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-11T01:10:12Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- Locutusque/hyperion-v2.0
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Hyperion-2.1-Mistral-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization.
Original model: https://huggingface.co/Locutusque/Hyperion-2.1-Mistral-7B
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Hyperion-2.1-Mistral-7B-Q8_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Hyperion-2.1-Mistral-7B-Q6_K.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Hyperion-2.1-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Hyperion-2.1-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Hyperion-2.1-Mistral-7B-Q5_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Hyperion-2.1-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [Hyperion-2.1-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Hyperion-2.1-Mistral-7B-Q4_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Hyperion-2.1-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Hyperion-2.1-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Hyperion-2.1-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Hyperion-2.1-Mistral-7B-Q2_K.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended.
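To fetch one of the files listed above programmatically rather than cloning the whole branch, here is a short sketch with `huggingface_hub` (pick any filename from the table):
```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo.
path = hf_hub_download(
    repo_id="bartowski/Hyperion-2.1-Mistral-7B-GGUF",
    filename="Hyperion-2.1-Mistral-7B-Q4_K_M.gguf",
)
print(path)
```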
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
stablediffusionapi/bx-astreapixie | stablediffusionapi | "2024-09-11T16:50:43Z" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-09-11T16:48:02Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# BX-AstreaPixie API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "bx-astreapixie".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/bx-astreapixie)
Model link: [View model](https://modelslab.com/models/bx-astreapixie)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "bx-astreapixie",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Cheng98/opt-350m-boolq | Cheng98 | "2024-04-08T21:06:51Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-06T09:19:10Z" | ---
language:
- en
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: opt-350m-boolq
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE BOOLQ
type: glue
args: boolq
metrics:
- name: Accuracy
type: accuracy
value: 0.6975535168195719
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-boolq
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the GLUE BOOLQ dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2006
- Accuracy: 0.6976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
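For reference, these settings map onto 🤗 `TrainingArguments` roughly as follows (an illustrative sketch, not the exact training script; the output directory is an assumption):

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="opt-350m-boolq",      # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```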
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
|
coffiee/dl28 | coffiee | "2025-02-17T08:22:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T08:21:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Epistemic_tiny_0.0_Seed105 | bmehrba | "2024-04-22T20:04:03Z" | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | "2024-04-22T20:03:59Z" | ---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
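The same setup can be expressed with the `transformers` `BitsAndBytesConfig` API (a minimal sketch mirroring the values above):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```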
### Framework versions
- PEFT 0.7.0.dev0
|
mradermacher/Tsunami-0.5x-7B-Instruct-GGUF | mradermacher | "2024-10-16T07:51:05Z" | 23 | 1 | transformers | [
"transformers",
"gguf",
"th",
"en",
"base_model:Tsunami-th/Tsunami-0.5x-7B-Instruct",
"base_model:quantized:Tsunami-th/Tsunami-0.5x-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-16T05:34:21Z" | ---
base_model: Tsunami-th/Tsunami-0.5x-7B-Instruct
language:
- th
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
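As a quick start, a downloaded quant can be run directly with llama.cpp (an illustrative invocation; adjust the file name, context size, and prompt to your setup):
```bash
./llama-cli -m Tsunami-0.5x-7B-Instruct.Q4_K_M.gguf -c 4096 -p "Hello"
```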
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-GGUF/resolve/main/Tsunami-0.5x-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ChenYang88/chentest0 | ChenYang88 | "2024-05-22T01:56:49Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-22T01:56:49Z" | ---
license: apache-2.0
---
|
shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF | shylane | "2025-03-18T21:42:14Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-18T21:41:42Z" | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo shylane/Llama-3.1-Nemotron-Nano-8B-v1-Q6_K-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q6_k.gguf -c 2048
```
|
Gnider/ruroberta_lit_classif_shit | Gnider | "2024-06-25T15:57:11Z" | 4 | 0 | transformers | [
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-24T19:20:12Z" | MAX_LEN = 256
TRAIN_BATCH_SIZE = 32
VALID_BATCH_SIZE = 32
EPOCHS = 6
LEARNING_RATE = 1e-05
Epoch 1/6, Training Loss: 2.3895558558977568, Training Accuracy: 0.25198364991584515, Validation Loss: 2.2213363370725085, Validation Accuracy: 0.2675266404935502
Epoch 2/6, Training Loss: 2.170463297000298, Training Accuracy: 0.30704496273142584, Validation Loss: 2.094081638114793, Validation Accuracy: 0.30061693774537296
Epoch 3/6, Training Loss: 2.085676504098452, Training Accuracy: 0.3193075258475595, Validation Loss: 2.1937254539557864, Validation Accuracy: 0.2675266404935502
Epoch 4/6, Training Loss: 2.0049061050781836, Training Accuracy: 0.3450348641500361, Validation Loss: 1.9938753971031733, Validation Accuracy: 0.34099831744251263
Epoch 5/6, Training Loss: 1.9397827350176298, Training Accuracy: 0.38494830488098103, Validation Loss: 1.9563701684985841, Validation Accuracy: 0.34604598990465507
Epoch 6/6, Training Loss: 1.8827617599413946, Training Accuracy: 0.40081750420774226, Validation Loss: 1.8399504039968764, Validation Accuracy: 0.40437464946719015 |
luiz-and-robert-thesis/all-mpnet-base-v2-lr-5e-7-margin-1-epoch-1 | luiz-and-robert-thesis | "2024-04-05T20:21:21Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-04-05T20:20:52Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# luiz-and-robert-thesis/all-mpnet-base-v2-lr-5e-7-margin-1-epoch-1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('luiz-and-robert-thesis/all-mpnet-base-v2-lr-5e-7-margin-1-epoch-1')
embeddings = model.encode(sentences)
print(embeddings)
```
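The returned embeddings can be compared directly, for example with cosine similarity (a small illustrative follow-up to the snippet above):

```python
from sentence_transformers import util

# Cosine similarity between the two example sentences.
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```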
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/all-mpnet-base-v2-lr-5e-7-margin-1-epoch-1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5885 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-07
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 882,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
MHanzl/tf_efficientnet_b3.in1k_ft_df24m_224 | MHanzl | "2024-07-30T11:03:30Z" | 6 | 0 | DanishFungi | [
"DanishFungi",
"pytorch",
"image-classification",
"ecology",
"fungi",
"FGVC",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | "2024-07-30T11:03:23Z" |
---
tags:
- image-classification
- ecology
- fungi
- FGVC
library_name: DanishFungi
license: cc-by-nc-4.0
---
# Model card for MHanzl/tf_efficientnet_b3.in1k_ft_df24m_224
## Model Details
- **Model Type:** Danish Fungi Classification
- **Model Stats:**
- Params (M): 11.0
- Image size: 224 x 224
- **Papers:**
- **Original:** ??
- **Train Dataset:** DF24M --> https://sites.google.com/view/danish-fungi-dataset
## Model Usage
### Image Embeddings
```python
import timm
import torch
import torchvision.transforms as T
from PIL import Image
from urllib.request import urlopen
model = timm.create_model("hf-hub:MHanzl/tf_efficientnet_b3.in1k_ft_df24m_224", pretrained=True)
model = model.eval()
train_transforms = T.Compose([T.Resize((224, 224)),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
img = Image.open(PATH_TO_YOUR_IMAGE)
output = model(train_transforms(img).unsqueeze(0))
# output is a (1, num_features) shaped tensor
```
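To turn the output into class probabilities, a softmax plus top-k readout can be applied (an illustrative sketch, assuming the head returns per-class logits; mapping indices to species names depends on the DF24M metadata):

```python
import torch.nn.functional as F

# Top-5 most likely classes for the input image.
probs = F.softmax(output, dim=1)
top5 = torch.topk(probs, k=5)
print(top5.indices, top5.values)
```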
## Citation
```bibtex
@InProceedings{Picek_2022_WACV,
author = {Picek, Luk\'a\v{s} and \v{S}ulc, Milan and Matas, Ji\v{r}\'{\i} and Jeppesen, Thomas S. and Heilmann-Clausen, Jacob and L\ae ss\o e, Thomas and Fr\o slev, Tobias},
title = {Danish Fungi 2020 - Not Just Another Image Recognition Dataset},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2022},
pages = {1525-1535}
}
```
```bibtex
@article{picek2022automatic,
title={Automatic Fungi Recognition: Deep Learning Meets Mycology},
author={Picek, Luk\'a\v{s} and \v{S}ulc, Milan and Matas, Ji\v{r}\'{\i} and Heilmann-Clausen, Jacob and Jeppesen, Thomas S and Lind, Emil},
journal={Sensors},
volume={22},
number={2},
pages={633},
year={2022},
publisher={Multidisciplinary Digital Publishing Institute}
}
```
|
mradermacher/Stheno-Mega-False-49B-L2-GGUF | mradermacher | "2024-06-05T13:34:02Z" | 2 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Stheno-Mega-False-49B-L2",
"base_model:quantized:Sao10K/Stheno-Mega-False-49B-L2",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T21:07:44Z" | ---
base_model: Sao10K/Stheno-Mega-False-49B-L2
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/Stheno-Mega-False-49B-L2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
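For the split Q8_0 quant listed below, concatenation is a plain `cat` (a sketch using the part names from the table):
```bash
cat Stheno-Mega-False-49B-L2.Q8_0.gguf.part1of2 \
    Stheno-Mega-False-49B-L2.Q8_0.gguf.part2of2 \
    > Stheno-Mega-False-49B-L2.Q8_0.gguf
```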
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q2_K.gguf) | Q2_K | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.IQ3_XS.gguf) | IQ3_XS | 20.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.IQ3_S.gguf) | IQ3_S | 21.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q3_K_S.gguf) | Q3_K_S | 21.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.IQ3_M.gguf) | IQ3_M | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q3_K_M.gguf) | Q3_K_M | 24.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q3_K_L.gguf) | Q3_K_L | 26.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.IQ4_XS.gguf) | IQ4_XS | 26.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q4_K_S.gguf) | Q4_K_S | 28.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q4_K_M.gguf) | Q4_K_M | 29.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q5_K_S.gguf) | Q5_K_S | 34.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q5_K_M.gguf) | Q5_K_M | 35.1 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q6_K.gguf) | Q6_K | 40.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF/resolve/main/Stheno-Mega-False-49B-L2.Q8_0.gguf.part2of2) | Q8_0 | 52.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso01/3f0fee1d-ff19-4446-95ae-7ffd554cd307 | lesso01 | "2025-03-25T23:21:44Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | "2025-03-25T20:50:05Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3f0fee1d-ff19-4446-95ae-7ffd554cd307
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 404ace6a3524de15_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/404ace6a3524de15_train_data.json
type:
field_input: Example
field_instruction: '@members'
field_output: Definition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso01/3f0fee1d-ff19-4446-95ae-7ffd554cd307
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000201
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/404ace6a3524de15_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 10
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb768b7b-ba0a-40ac-be01-5d7d292c6239
wandb_project: 01a
wandb_run: your_name
wandb_runid: eb768b7b-ba0a-40ac-be01-5d7d292c6239
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3f0fee1d-ff19-4446-95ae-7ffd554cd307
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 4.7949 |
| 1.7221 | 0.1411 | 500 | 1.6986 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cky911/deepseek_sql_model | cky911 | "2025-02-12T09:06:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-12T09:06:37Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cky911
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/shining_arknights | LarryAIDraw | "2023-09-10T16:24:09Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-09-10T16:16:13Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/130929/shining-arknights |
Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF | Triangle104 | "2025-01-27T12:44:46Z" | 2,856 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"base_model:quantized:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-27T12:43:38Z" | ---
base_model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
library_name: transformers
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2`](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q8_0.gguf -c 2048
```
|
zera09/segment_mask_token_v2 | zera09 | "2025-02-18T10:44:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:zera09/custom-longt5-with-ts",
"base_model:finetune:zera09/custom-longt5-with-ts",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-18T10:43:49Z" | ---
library_name: transformers
base_model: zera09/custom-longt5-with-ts
tags:
- generated_from_trainer
model-index:
- name: segment_mask_token_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segment_mask_token_v2
This model is a fine-tuned version of [zera09/custom-longt5-with-ts](https://huggingface.co/zera09/custom-longt5-with-ts) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 72 | 5.8637 |
| 21.3135 | 2.0 | 144 | 1.6856 |
| 1.3759 | 3.0 | 216 | 1.0404 |
| 1.3759 | 4.0 | 288 | 0.6246 |
| 0.6906 | 5.0 | 360 | 0.3509 |
| 0.3575 | 6.0 | 432 | 0.2973 |
| 0.3583 | 7.0 | 504 | 0.1697 |
| 0.3583 | 8.0 | 576 | 0.1578 |
| 0.2914 | 9.0 | 648 | 0.1353 |
| 0.1716 | 10.0 | 720 | 0.1145 |
| 0.1716 | 11.0 | 792 | 0.0990 |
| 0.2055 | 12.0 | 864 | 0.0975 |
| 0.1352 | 13.0 | 936 | 0.0818 |
| 0.116 | 14.0 | 1008 | 0.0789 |
| 0.116 | 15.0 | 1080 | 0.0812 |
| 0.1264 | 16.0 | 1152 | 0.0800 |
| 0.1599 | 17.0 | 1224 | 0.0762 |
| 0.1599 | 18.0 | 1296 | 0.0720 |
| 0.0694 | 19.0 | 1368 | 0.0726 |
| 0.0742 | 20.0 | 1440 | 0.0718 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso06/da6e1f9e-dad2-49a1-9022-605ed058360a | lesso06 | "2025-02-13T04:03:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2025-02-13T03:50:23Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: da6e1f9e-dad2-49a1-9022-605ed058360a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# da6e1f9e-dad2-49a1-9022-605ed058360a
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000206
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.7236 |
| 2.2946 | 0.0188 | 50 | 2.4655 |
| 2.3771 | 0.0377 | 100 | 2.4201 |
| 2.387 | 0.0565 | 150 | 2.4090 |
| 2.4273 | 0.0754 | 200 | 2.3968 |
| 2.3688 | 0.0942 | 250 | 2.3789 |
| 2.3742 | 0.1131 | 300 | 2.3666 |
| 2.2643 | 0.1319 | 350 | 2.3536 |
| 2.3367 | 0.1508 | 400 | 2.3386 |
| 2.3294 | 0.1696 | 450 | 2.3349 |
| 2.4832 | 0.1885 | 500 | 2.3343 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zelk12/MT3-Gen6-UC-gemma-2-9B | zelk12 | "2025-02-12T11:51:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B",
"base_model:merge:zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B",
"base_model:zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B",
"base_model:merge:zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T11:45:39Z" | ---
base_model:
- zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B
- zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
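Conceptually, SLERP interpolates each pair of weight tensors along the great circle between them instead of linearly. A minimal sketch (not mergekit's actual implementation) looks like this; with `t: 0.25` as in the configuration below, the result stays closer to the base model:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_n = (a / (a.norm() + eps)).flatten()
    b_n = (b / (b.norm() + eps)).flatten()
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear: fall back to plain lerp
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```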
### Models Merged
The following models were included in the merge:
* [zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B](https://huggingface.co/zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B)
* [zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B](https://huggingface.co/zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B
- model: zelk12/MT3-Gen6-C-gemma-2-ItAMTMaMe02012025163610-9B
merge_method: slerp
base_model: zelk12/MT3-Gen6-U-gemma-2-ItARv0.5-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
csukuangfj/sherpa-onnx-zh-wenet-aishell | csukuangfj | "2023-11-16T04:37:27Z" | 0 | 0 | null | [
"onnx",
"region:us"
] | null | "2023-11-15T02:57:01Z" | # Introduction
This model is converted from https://wenet.org.cn/downloads?models=wenet&version=aishell_u2pp_conformer_exp.tar.gz
|
wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM | wdika | "2024-03-06T10:49:56Z" | 0 | 0 | atommic | [
"atommic",
"image-reconstruction",
"JointICNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | "2024-03-05T17:50:02Z" | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
sens_unet_num_filters: 16
sens_unet_num_pool_layers: 2
sens_unet_dropout_probability: 0.0
sens_unet_padding_size: 11
sens_unet_normalize: true
dimensionality: 2
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, together with the chosen coil sensitivity map estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
| Acceleration | MSE | NMSE | PSNR | SSIM |
|:---|:---|:---|:---|:---|
| 4x | 0.001774 +/- 0.004331 | 0.04376 +/- 0.08693 | 28.57 +/- 5.497 | 0.8318 +/- 0.1976 |
| 8x | 0.003421 +/- 0.005284 | 0.08763 +/- 0.1835 | 25.5 +/- 5.384 | 0.7719 +/- 0.2019 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity map estimation and Geometric Decomposition Coil Compression to 1 coil, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775. |
Xu-Ouyang/pythia-410m-deduped-int4-step98000-GPTQ-wikitext2 | Xu-Ouyang | "2024-08-21T23:30:33Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-08-21T23:30:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
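In the absence of official instructions, a generic loading sketch (assuming the GPTQ checkpoint loads through `transformers` with the `optimum`/`auto-gptq` integration installed) might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-410m-deduped-int4-step98000-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```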
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
manelalab/chrono-gpt-v1-20221231 | manelalab | "2025-03-06T16:45:56Z" | 2 | 0 | pytorch | [
"pytorch",
"ChronoGPT",
"chronologically consistent",
"modded-nanogpt",
"hellaswag",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | "2025-03-05T17:18:39Z" | ---
library_name: pytorch
license: mit
language:
- en
tags:
- chronologically consistent
- modded-nanogpt
- hellaswag
pipeline_tag: text-generation
inference: false
---
# ChronoGPT
## Model Description
ChronoGPT is a series of **high-performance, chronologically consistent large language models (LLMs)** designed to eliminate lookahead bias and training leakage while maintaining good language understanding in time-sensitive applications. The model is pretrained on **diverse, high-quality, open-source, and timestamped text** to maintain chronological consistency.
All models in the series achieve **HellaSwag benchmark scores that surpass those of the GPT-2 124M model with the same parameter count.** This approach preserves the integrity of historical analysis and enables more reliable economic and financial modeling.
- **Developed by:** Songrun He, Linying Lv, Asaf Manela, Jimmy Wu
- **Model type:** Transformer-based autoregressive decoder (Modified modded-NanoGPT architecture)
- **Language(s) (NLP):** English
- **License:** MIT License
## Model Sources
- **Paper:** "Chronologically Consistent Large Language Models" (He, Lv, Manela, Wu, 2025)
## How to Get Started with the Model
The model is compatible with the following requirements:
```sh
pip install -r requirements.txt
```
Here is example code for using the model:
```python
from modeling_chronogpt import ChronoGPT
import tiktoken
import torch
device = 'cuda:0'
max_length = 1792
tokenizer = tiktoken.get_encoding("gpt2")
model = ChronoGPT.from_pretrained("manelalab/chrono-gpt-v1-19991231", trust_remote_code=True).to(device)
text = "Obviously, the time continuum has been disrupted, creating a new temporal event sequence resulting in this alternate reality. -- Dr. Brown, Back to the Future Part II"
inputs = torch.tensor(tokenizer.encode(text))[:max_length].reshape(1,-1).to(device)
logits, emb = model(inputs)
```
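From the returned `logits` you can take, for example, a greedy next-token continuation. A minimal sketch, assuming the `(logits, emb)` signature shown above and the usual `[batch, seq, vocab]` logits shape:
```python
# Greedy next-token prediction from the last position.
next_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_id]))
```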
## Training Details
### Training Data
- **Pretraining corpus:** Our initial model chrono-gpt-v1-19991231 is pretrained on 21 billion tokens of pre-2000, diverse, high-quality, and open-source text data to ensure no leakage of data from later periods.
- **Incremental updates:** Yearly updates from 2000 to 2024 with an additional 65 billion tokens of timestamped text.
### Training Procedure
- **Architecture:** modded-NanoGPT-based model with the Muon optimizer, skip connections, rotary embeddings, and flex attention.
- **Objective:** Autoregressive text generation.
## Evaluation
### Testing Data, Factors & Metrics
- **Language understanding:** Evaluated on **HellaSwag benchmark** tasks.
- **Financial forecasting:** Evaluated using **return prediction task** based on Dow Jones Newswire data.
- **Comparison models:** ChronoGPT was benchmarked against **BERT, FinBERT, StoriesLM-v1-1963, and Llama 3.1**.
### Results
- **HellaSwag Score:** chrono-gpt-v1-19991231 and chrono-gpt-v1-20241231 achieved HellaSwag score of 0.295 and 0.324 respectively, outperforming GPT-2 (0.294).
- **Stock return predictions:** During the sample from 2008-01 to 2023-07, chrono-gpt-v1-realtime achieves a long-short portfolio **Sharpe ratio of 4.50**, outperforming BERT, FinBERT, and StoriesLM-v1-1963, and comparable to **Llama 3.1 8B (4.90)**.
## Citation
```
@article{He2025ChronoBERT,
title={Chronologically Consistent Large Language Models},
author={He, Songrun and Lv, Linying and Manela, Asaf and Wu, Jimmy},
journal={Working Paper},
year={2025}
}
```
## Model Card Authors
- Songrun He (Washington University in St. Louis, [email protected])
- Linying Lv (Washington University in St. Louis, [email protected])
- Asaf Manela (Washington University in St. Louis, [email protected])
- Jimmy Wu (Washington University in St. Louis, [email protected])
|
voidful/bart-distractor-generation-pm | voidful | "2023-03-22T02:11:36Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"distractor",
"generation",
"seq2seq",
"en",
"dataset:race",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation-pm
## Model description
This model is a sequence-to-sequence distractor generator which takes an answer, question and context as an input, and generates a distractor as an output. It is based on a pretrained `bart-base` model.
This model was trained with Parallel MLM; refer to the [Paper](https://www.aclweb.org/anthology/2020.findings-emnlp.393/).
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examination-style multiple-choice distractors. The model performs best with full-sentence answers.
#### How to use
The model takes concatenated context, question and answers as an input sequence, and will generate a full distractor sentence as an output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
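A minimal sketch of that flow (the standard Hugging Face auto classes are assumed; adjust generation settings as needed):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("voidful/bart-distractor-generation-pm")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/bart-distractor-generation-pm")

# context </s> question </s> answer
text = "The quick brown fox jumps over the lazy dog. </s> What does the fox jump over? </s> the lazy dog."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```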
#### Limitations and bias
The model is limited to generating distractors in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can potentially be leading or reflect biases present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
SloNSteady/cities | SloNSteady | "2024-06-29T08:45:45Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-29T08:45:29Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cities
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.75
---
# cities
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Hongkong

#### New York city

#### Seoul

#### Tokyo

#### Washington DC
 |
shibajustfor/53619833-64e4-4ed3-8727-538e6b94c211 | shibajustfor | "2025-03-09T23:26:34Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"region:us"
] | null | "2025-03-09T23:26:21Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: openlm-research/open_llama_3b
model-index:
- name: shibajustfor/53619833-64e4-4ed3-8727-538e6b94c211
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/53619833-64e4-4ed3-8727-538e6b94c211
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jubayerarnob/falconllm7b | jubayerarnob | "2023-10-29T19:54:13Z" | 0 | 0 | peft | [
"peft",
"falcon",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2023-10-29T19:50:34Z" | ---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent code sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
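The same settings expressed as a `transformers` config object — a sketch reconstructing the values listed above:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```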
### Framework versions
- PEFT 0.6.0.dev0
|
Sparsity/atten_nolimit_3330 | Sparsity | "2024-05-12T20:18:58Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-12T20:16:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hnj0022/q-FrozenLake-v1-4x4-noSlippery | hnj0022 | "2025-03-11T09:47:04Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-11T09:46:28Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="hnj0022/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
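A short greedy-rollout sketch with the loaded table; the `"qtable"` key and the five-tuple `step()` return are assumptions (Deep RL course convention and the gym >= 0.26 API — adjust for your gym version):
```python
state, info = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```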
|
DannMacias/finetuning-sentiment-model-3000-samples | DannMacias | "2023-06-08T15:48:29Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-08T15:29:06Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
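A minimal inference sketch (the label names are whatever the fine-tune kept, typically LABEL_0/LABEL_1):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="DannMacias/finetuning-sentiment-model-3000-samples")
print(clf("A thoroughly enjoyable film."))
```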
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RichardErkhov/internlm_-_internlm2-math-base-20b-gguf | RichardErkhov | "2024-06-03T14:07:53Z" | 4 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-03T05:07:44Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-math-base-20b - GGUF
- Model creator: https://huggingface.co/internlm/
- Original model: https://huggingface.co/internlm/internlm2-math-base-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-math-base-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q2_K.gguf) | Q2_K | 7.03GB |
| [internlm2-math-base-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.IQ3_XS.gguf) | IQ3_XS | 7.79GB |
| [internlm2-math-base-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.IQ3_S.gguf) | IQ3_S | 8.2GB |
| [internlm2-math-base-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q3_K_S.gguf) | Q3_K_S | 1.63GB |
| [internlm2-math-base-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.IQ3_M.gguf) | IQ3_M | 8.5GB |
| [internlm2-math-base-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q3_K.gguf) | Q3_K | 9.05GB |
| [internlm2-math-base-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q3_K_M.gguf) | Q3_K_M | 9.05GB |
| [internlm2-math-base-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q3_K_L.gguf) | Q3_K_L | 9.83GB |
| [internlm2-math-base-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.IQ4_XS.gguf) | IQ4_XS | 10.12GB |
| [internlm2-math-base-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q4_0.gguf) | Q4_0 | 10.55GB |
| [internlm2-math-base-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.IQ4_NL.gguf) | IQ4_NL | 10.65GB |
| [internlm2-math-base-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q4_K_S.gguf) | Q4_K_S | 10.62GB |
| [internlm2-math-base-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q4_K.gguf) | Q4_K | 11.16GB |
| [internlm2-math-base-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q4_K_M.gguf) | Q4_K_M | 11.16GB |
| [internlm2-math-base-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [internlm2-math-base-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q5_0.gguf) | Q5_0 | 12.79GB |
| [internlm2-math-base-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q5_K_S.gguf) | Q5_K_S | 12.79GB |
| [internlm2-math-base-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q5_K.gguf) | Q5_K | 13.11GB |
| [internlm2-math-base-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q5_K_M.gguf) | Q5_K_M | 13.11GB |
| [internlm2-math-base-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q5_1.gguf) | Q5_1 | 13.91GB |
| [internlm2-math-base-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q6_K.gguf) | Q6_K | 15.18GB |
| [internlm2-math-base-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-base-20b-gguf/blob/main/internlm2-math-base-20b.Q8_0.gguf) | Q8_0 | 19.66GB |
Original model description:
---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
---
# InternLM-Math
<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM-Math</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
State-of-the-art bilingual open-sourced Math reasoning LLMs.
A **solver**, **prover**, **verifier**, **augmentor**.
[💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b) [🤗 Checkpoints](https://huggingface.co/internlm/internlm2-math-7b) [OpenXLab](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-7B) [<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> ModelScope](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-7b/summary)
</div>
# News
- [2024.01.29] We add checkpoints from ModelScope. Tech report is on the way!
- [2024.01.26] We add checkpoints from OpenXLab, which makes it easier for Chinese users to download!
# Introduction
- **7B and 20B Chinese and English Math LMs with better-than-ChatGPT performance.** InternLM2-Math models are continue-pretrained from InternLM2-Base with ~100B high-quality math-related tokens and SFT-ed with ~2M bilingual math supervised examples. We apply minhash and exact number match to decontaminate possible test set leakage.
- **Add Lean as a supported language for math problem solving and math theorem proving.** We are exploring combining Lean 3 with InternLM-Math for verifiable math reasoning. InternLM-Math can generate Lean code for simple math reasoning tasks like GSM8K or provide possible proof tactics based on Lean states.
- **Can also be viewed as a reward model, supporting the Outcome/Process/Lean Reward Model.** We supervise InternLM2-Math with various types of reward modeling data, so that InternLM2-Math can also verify chain-of-thought processes. We also add the ability to convert a chain-of-thought process into Lean 3 code.
- **A Math LM Augment Helper** and **Code Interpreter**. InternLM2-Math can help augment math reasoning problems and solve them using the code interpreter, which lets you generate synthetic data more quickly!

# Models
**InternLM2-Math-Base-7B** and **InternLM2-Math-Base-20B** are pretrained checkpoints. **InternLM2-Math-7B** and **InternLM2-Math-20B** are SFT checkpoints.
| Model |Model Type | Transformers(HF) |OpenXLab| ModelScope | Release Date |
|---|---|---|---|---|---|
| **InternLM2-Math-Base-7B** | Base| [🤗internlm/internlm2-math-base-7b](https://huggingface.co/internlm/internlm2-math-base-7b) |[OpenXLab](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-Base-7B)| [<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-base-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-base-7b/summary)| 2024-01-23|
| **InternLM2-Math-Base-20B** | Base| [🤗internlm/internlm2-math-base-20b](https://huggingface.co/internlm/internlm2-math-base-20b) |[OpenXLab](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-Base-20B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-base-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-base-20b/summary)| 2024-01-23|
| **InternLM2-Math-7B** | Chat| [🤗internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b) |[OpenXLab](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-7B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-7b/summary)| 2024-01-23|
| **InternLM2-Math-20B** | Chat| [🤗internlm/internlm2-math-20b](https://huggingface.co/internlm/internlm2-math-20b) |[OpenXLab](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-20B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-20b/summary)| 2024-01-23|
# Performance
## Pretrain Performance
We evaluate pretrain checkpoints based on greedy decoding with few-shot COT. Details of pretraining will be introduced in the tech report.
| Model | GSM8K | MATH |
|------------------------|---------|--------|
| Llama2-7B | 11.8 | 3.2 |
| Llemma-7B | 36.4 | 18.0 |
| InternLM2-Base-7B | 36.5 | 8.6 |
| **InternLM2-Math-Base-7B** | **49.2** | **21.5** |
| Minerva-8B | 16.2 | 14.1 |
| InternLM2-Base-20B | 54.6 | 13.7 |
| **InternLM2-Math-Base-20B** | **63.7** | **27.3** |
| Llemma-34B | 51.5 | 25.0 |
| Minerva-62B | 52.4 | 27.6 |
| Minerva-540B | 58.8 | 33.6 |
## SFT Performance
All performance is based on greedy decoding with COT. We notice that the Hungary score varies considerably across our checkpoints, while the other results are very stable. This may be due to the small number of Hungary exam problems.
| Model | Model Type | GSM8K | MATH | Hungary |
|------------------------|----------------------|--------|--------|---------|
| Qwen-7B-Chat | General | 51.7 | 11.6 | - |
| DeepSeek-7B-Chat | General | 63.0 | 15.8 | 28.5 |
| InternLM2-Chat-7B | General | 70.7 | 23.0 | - |
| ChatGLM3-6B | General | 53.8 | 20.4 | 32 |
| MetaMath-Mistral-7B | Mathematics | 77.7 | 28.2 | 29 |
| MetaMath-Llemma-7B | Mathematics | 69.2 | 30.0 | - |
| **InternLM2-Math-7B** | Mathematics | **78.1** | **34.6** | **55** |
| InternLM2-Chat-20B | General | 79.6 | 31.9 | - |
| MetaMath-Llemma-34B | Mathematics | 75.8 | 34.8 | - |
| **InternLM2-Math-20B** | Mathematics | **82.6** | **37.7** | **66** |
| Qwen-72B | General | 78.9 | 35.2 | 52 |
| DeepSeek-67B | General | 84.1 | 32.6 | 58 |
| ChatGPT (GPT-3.5) | General | 80.8 | 34.1 | 41 |
| GPT4 (First version) | General | 92.0 | 42.5 | 68 |
# Inference
## LMDeploy
We suggest using [LMDeploy](https://github.com/InternLM/LMDeploy)(>=0.2.1) for inference.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
backend_config = TurbomindEngineConfig(model_name='internlm2-chat-7b', tp=1, cache_max_entry_count=0.3)
chat_template = ChatTemplateConfig(model_name='internlm2-chat-7b', system='', eosys='', meta_instruction='')
pipe = pipeline(model_path='internlm/internlm2-math-base-20b', chat_template_config=chat_template, backend_config=backend_config)
problem = '1+1='
result = pipe([problem], request_output_len=1024, top_k=1)
```
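Each item in `result` is a response object whose generated text is exposed as a `text` attribute (attribute name assumed from recent LMDeploy releases — verify against your installed version):
```python
print(result[0].text)
```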
## Huggingface
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-math-base-20b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-math-base-20b", trust_remote_code=True, torch_dtype=torch.float16).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "1+1=", history=[], meta_instruction="")
print(response)
```
# Special usages
We list some instructions used in our SFT; you can use them to prompt the model. Other phrasings may also work, but the following are recommended. InternLM2-Math may combine the following abilities, but this is not guaranteed. A usage sketch follows the table below.
- Translate proof problem to Lean
- Using Lean 3 to solve GSM8K problem
- Generate problem based on Lean 3 code
- Play 24 point game
- Augment a harder math problem
| Description | Query |
| --- | --- |
| Solving question via chain-of-thought | {Question} |
| Solving question via Lean 3 | {Question}\nSolve this via Lean 3 |
| Outcome reward model | Given a question and an answer, check is it correct?\nQuestion:{Question}\nAnswer:{COT} |
| Process reward model | Given a question and an answer, check correctness of each step.\nQuestion:{Question}\nAnswer:{COT} |
| Reward model | Given a question and two answers, which one is better? \nQuestion:{Question}\nAnswer 1:{COT}\nAnswer 2:{COT} |
| Convert chain-of-thought to Lean 3 | Convert this answer into Lean3. Question:{Question}\nAnswer:{COT} |
| Convert Lean 3 to chain-of-thought | Convert this lean 3 code into a natural language problem with answers:\n{LEAN Code} |
| Translate question and chain-of-thought answer to a proof statement | Convert this question and answer into a proof format.\nQuestion:{Question}\nAnswer:{COT} |
| Translate proof problem to Lean 3 | Convert this natural langauge statement into a Lean 3 theorem statement:{Theorem} |
| Translate Lean 3 to proof problem | Convert this Lean 3 theorem statement into natural language:{STATEMENT} |
| Suggest a tactic based on Lean state | Given the Lean 3 tactic state, suggest a next tactic:\n{LEAN State} |
| Rephrase Problem | Describe this problem in another way. {Question} |
| Augment Problem | Please augment a new problem based on: {Question} |
| Augment a harder Problem | Increase the complexity of the problem: {Question} |
| Change specific numbers | Change specific numbers: {Question}|
| Introduce fractions or percentages | Introduce fractions or percentages: {Question}|
| Code Interpreter | [lagent](https://github.com/InternLM/InternLM/blob/main/agent/lagent.md) |
| In-context Learning | Question:{Question}\nAnswer:{COT}\n...Question:{Question}\nAnswer:{COT}|
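As noted above, a usage sketch for one of these instructions, reusing the `model` and `tokenizer` objects from the Huggingface example (the question string is a made-up placeholder):
```python
question = "Janet has 3 apples and buys 2 more. How many apples does she have?"
query = f"{question}\nSolve this via Lean 3"
response, history = model.chat(tokenizer, query, history=[], meta_instruction="")
print(response)
```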
# Fine-tune and others
Please refer to [InternLM](https://github.com/InternLM/InternLM/tree/main).
# Known issues
Our model is still under development and will be upgraded. There are some known issues with InternLM-Math; if you find the performance of some abilities lacking, feel free to open an issue.
- Skips calculation steps.
- Performs badly on Chinese fill-in-the-blank problems and English multiple-choice problems due to SFT data composition.
- Tends to generate Code Interpreter calls when facing Chinese problems due to SFT data composition.
- The reward model mode can be better leveraged with assigned token probabilities.
- Code-switches due to SFT data composition.
- Some abilities of Lean can only be adapted to GSM8K-like problems (e.g. Convert chain-of-thought to Lean 3), and performance related to Lean is not guaranteed.
# Citation and Tech Report
To be appended.
|
MHaurel/a2c-PandaReachDense-v2 | MHaurel | "2023-08-21T13:54:48Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-02T09:41:04Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.45 +/- 0.93
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the standard SB3 template naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's files for the actual name.
checkpoint = load_from_hub(repo_id="MHaurel/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
cakeyu/whisper-dutch | cakeyu | "2023-04-21T12:31:29Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-04-21T12:25:47Z" | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
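A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cakeyu/whisper-dutch")
print(asr("sample.wav")["text"])
```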
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF | mradermacher | "2025-01-30T17:00:27Z" | 204 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DoppelReflEx/MN-12B-LilithFrame-Experiment-3",
"base_model:quantized:DoppelReflEx/MN-12B-LilithFrame-Experiment-3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-30T14:50:33Z" | ---
base_model: DoppelReflEx/MN-12B-LilithFrame-Experiment-3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/DoppelReflEx/MN-12B-LilithFrame-Experiment-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-LilithFrame-Experiment-3-GGUF/resolve/main/MN-12B-LilithFrame-Experiment-3.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
There is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hopkins/mbart-finetuned-eng-kor-36 | hopkins | "2023-07-03T01:42:12Z" | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-03T01:24:37Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-36
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9933
- Bleu: 6.9791
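A minimal translation sketch, assuming the fine-tune kept the mBART-50 language codes with Korean as the target:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("hopkins/mbart-finetuned-eng-kor-36", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("hopkins/mbart-finetuned-eng-kor-36")

batch = tokenizer("How are you?", return_tensors="pt")
generated = model.generate(**batch, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```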
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lesso02/db7fa05d-a82f-43dc-9084-3d41b53e204b | lesso02 | "2025-03-22T20:38:19Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | "2025-03-22T18:16:07Z" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db7fa05d-a82f-43dc-9084-3d41b53e204b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 73e4c5faf17079a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/73e4c5faf17079a4_train_data.json
type:
field_input: texteHtml
field_instruction: ref
field_output: texte
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso02/db7fa05d-a82f-43dc-9084-3d41b53e204b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/73e4c5faf17079a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 20
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bc830f5-4229-4d9e-bbe1-911dc6deeca2
wandb_project: 02a
wandb_run: your_name
wandb_runid: 1bc830f5-4229-4d9e-bbe1-911dc6deeca2
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# db7fa05d-a82f-43dc-9084-3d41b53e204b
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.2030 |
| 0.0126 | 0.1099 | 500 | 0.0108 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dreamnaught/VergilDMCRVC2 | Dreamnaught | "2023-06-24T10:52:10Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-06-24T08:18:42Z" | ---
license: openrail
---
RVC v2 model, trained for 350 epochs with mangio-crepe on DMC 5 sound assets. |
jcook2/RandomAlgoModel | jcook2 | "2025-02-12T12:40:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-11T14:07:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso12/0110fd8e-3362-4683-b0d3-82d7191ac1e4 | lesso12 | "2025-02-09T14:46:22Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-09T13:59:30Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0110fd8e-3362-4683-b0d3-82d7191ac1e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 0110fd8e-3362-4683-b0d3-82d7191ac1e4
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
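The effective batch size reported above follows directly from the per-device batch size and gradient accumulation; a quick check:
```python
# total_train_batch_size above is per-device batch size x accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 8
```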
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.1482 |
| 3.0703 | 0.0033 | 50 | 1.2389 |
| 2.9545 | 0.0065 | 100 | 1.4462 |
| 2.8028 | 0.0098 | 150 | 1.3436 |
| 3.1653 | 0.0130 | 200 | 1.2660 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mehdirafiei/Llama3_columns | mehdirafiei | "2024-04-21T11:02:38Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-21T10:58:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jamesgoncha/fuckorestis | jamesgoncha | "2025-03-06T13:04:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-06T13:04:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Calcium-Opus-14B-Elite-1M-GGUF | mradermacher | "2025-01-27T23:48:00Z" | 338 | 1 | transformers | [
"transformers",
"gguf",
"opus",
"14b",
"CoCo",
"reasoning",
"cosine",
"en",
"base_model:prithivMLmods/Calcium-Opus-14B-Elite-1M",
"base_model:quantized:prithivMLmods/Calcium-Opus-14B-Elite-1M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-27T17:31:20Z" | ---
base_model: prithivMLmods/Calcium-Opus-14B-Elite-1M
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- opus
- 14b
- CoCo
- reasoning
- cosine
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Calcium-Opus-14B-Elite-1M
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
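If a quant in this repo is ever split into parts, the pieces just need to be byte-concatenated in order before loading; a minimal sketch (the part file names here are hypothetical — use the actual names from the repo):
```python
# Minimal sketch: byte-concatenate split GGUF parts into a single file.
import shutil

parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]  # hypothetical names
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```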
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-14B-Elite-1M-GGUF/resolve/main/Calcium-Opus-14B-Elite-1M.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gbemilekeonilude/SmolLM-360M-Python | gbemilekeonilude | "2025-03-15T01:12:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM-360M",
"base_model:finetune:HuggingFaceTB/SmolLM-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-15T01:12:07Z" | ---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM-360M
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: SmolLM-360M-Python
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolLM-360M-Python
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-360M](https://huggingface.co/HuggingFaceTB/SmolLM-360M) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
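As a hedged sketch (not from the original card), the checkpoint can be exercised with a plain text-generation pipeline; the prompt below is illustrative:
```python
# Minimal sketch: generate from this checkpoint with a transformers pipeline.
from transformers import pipeline

pipe = pipeline("text-generation", model="gbemilekeonilude/SmolLM-360M-Python")
print(pipe("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```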
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 2.20.0
- Tokenizers 0.21.0
|
ramixpe/r128_a128_2ep | ramixpe | "2024-04-15T12:24:56Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T12:24:49Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: meta-llama/Llama-2-13b-chat-hf
---
# Uploaded model
- **Developed by:** ramixpe
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Llama-2-13b-chat-hf
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
akxier/perros_gatos | akxier | "2024-04-24T18:02:51Z" | 164 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-24T17:23:18Z" | ---
license: cc-by-nc-4.0
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
--- |
biustnaspust/puszek21 | biustnaspust | "2025-01-22T16:14:49Z" | 47 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T16:09:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oneandahalfcats/twoandahalfcats | oneandahalfcats | "2024-04-12T17:39:00Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-12T16:32:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Blablablab/neurobiber | Blablablab | "2025-02-20T20:47:11Z" | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | "2025-02-19T20:50:51Z" | # Neurobiber: Fast and Interpretable Stylistic Feature Extraction
**Neurobiber** is a transformer-based model that quickly predicts **96 interpretable stylistic features** in text. These features are inspired by Biber's multidimensional framework of linguistic style, capturing everything from **pronouns** and **passives** to **modal verbs** and **discourse devices**. By combining a robust, linguistically informed feature set with the speed of neural inference, Neurobiber enables large-scale stylistic analyses that were previously infeasible.
## Why Neurobiber?
Extracting Biber-style features typically involves running a full parser or specialized tagger, which can be computationally expensive for large datasets or real-time applications. Neurobiber overcomes these challenges by:
- **Operating up to 56x faster** than parsing-based approaches.
- Retaining the **interpretability** of classical Biber-like feature definitions.
- Delivering **high accuracy** on diverse text genres (e.g., social media, news, literary works).
- Allowing seamless integration with **modern deep learning** pipelines via Hugging Face.
By bridging detailed linguistic insights and industrial-scale performance, Neurobiber supports tasks in register analysis, style transfer, and more.
## Example Script
Below is an **example** showing how to load Neurobiber from Hugging Face, process single or multiple texts, and obtain a 96-dimensional binary vector for each input.
```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "Blablablab/neurobiber"
CHUNK_SIZE = 512  # Neurobiber was trained with max_length=512

# List of the 96 features that Neurobiber can predict
BIBER_FEATURES = [
    "BIN_QUAN","BIN_QUPR","BIN_AMP","BIN_PASS","BIN_XX0","BIN_JJ",
    "BIN_BEMA","BIN_CAUS","BIN_CONC","BIN_COND","BIN_CONJ","BIN_CONT",
    "BIN_DPAR","BIN_DWNT","BIN_EX","BIN_FPP1","BIN_GER","BIN_RB",
    "BIN_PIN","BIN_INPR","BIN_TO","BIN_NEMD","BIN_OSUB","BIN_PASTP",
    "BIN_VBD","BIN_PHC","BIN_PIRE","BIN_PLACE","BIN_POMD","BIN_PRMD",
    "BIN_WZPRES","BIN_VPRT","BIN_PRIV","BIN_PIT","BIN_PUBV","BIN_SPP2",
    "BIN_SMP","BIN_SERE","BIN_STPR","BIN_SUAV","BIN_SYNE","BIN_TPP3",
    "BIN_TIME","BIN_NOMZ","BIN_BYPA","BIN_PRED","BIN_TOBJ","BIN_TSUB",
    "BIN_THVC","BIN_NN","BIN_DEMP","BIN_DEMO","BIN_WHQU","BIN_EMPH",
    "BIN_HDG","BIN_WZPAST","BIN_THAC","BIN_PEAS","BIN_ANDC","BIN_PRESP",
    "BIN_PROD","BIN_SPAU","BIN_SPIN","BIN_THATD","BIN_WHOBJ","BIN_WHSUB",
    "BIN_WHCL","BIN_ART","BIN_AUXB","BIN_CAP","BIN_SCONJ","BIN_CCONJ",
    "BIN_DET","BIN_EMOJ","BIN_EMOT","BIN_EXCL","BIN_HASH","BIN_INF",
    "BIN_UH","BIN_NUM","BIN_LAUGH","BIN_PRP","BIN_PREP","BIN_NNP",
    "BIN_QUES","BIN_QUOT","BIN_AT","BIN_SBJP","BIN_URL","BIN_WH",
    "BIN_INDA","BIN_ACCU","BIN_PGAS","BIN_CMADJ","BIN_SPADJ","BIN_X"
]

def load_model_and_tokenizer():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).to("cuda")
    model.eval()
    return model, tokenizer

def chunk_text(text, chunk_size=CHUNK_SIZE):
    tokens = text.strip().split()
    if not tokens:
        return []
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), chunk_size)]

def get_predictions_chunked_batch(model, tokenizer, texts, chunk_size=CHUNK_SIZE, subbatch_size=32):
    chunked_texts = []
    chunk_indices = []
    for idx, text in enumerate(texts):
        start = len(chunked_texts)
        text_chunks = chunk_text(text, chunk_size)
        chunked_texts.extend(text_chunks)
        chunk_indices.append({
            'original_idx': idx,
            'chunk_range': (start, start + len(text_chunks))
        })

    # If there are no chunks (empty inputs), return zeros
    if not chunked_texts:
        return np.zeros((len(texts), model.config.num_labels))

    all_chunk_preds = []
    for i in range(0, len(chunked_texts), subbatch_size):
        batch_chunks = chunked_texts[i : i + subbatch_size]
        encodings = tokenizer(
            batch_chunks,
            return_tensors='pt',
            padding=True,
            truncation=True,
            max_length=chunk_size
        ).to("cuda")
        with torch.no_grad(), torch.amp.autocast("cuda"):
            outputs = model(**encodings)
            probs = torch.sigmoid(outputs.logits)
        all_chunk_preds.append(probs.cpu())

    all_chunk_preds = torch.cat(all_chunk_preds, dim=0) if all_chunk_preds else torch.empty(0)

    predictions = [None] * len(texts)
    for info in chunk_indices:
        start, end = info['chunk_range']
        if start == end:
            # No tokens => no features
            pred = torch.zeros(model.config.num_labels)
        else:
            # Take max across chunks for each feature
            chunk_preds = all_chunk_preds[start:end]
            pred, _ = torch.max(chunk_preds, dim=0)
        predictions[info['original_idx']] = (pred > 0.5).int().numpy()
    return np.array(predictions)

def predict_batch(model, tokenizer, texts, chunk_size=CHUNK_SIZE, subbatch_size=32):
    return get_predictions_chunked_batch(model, tokenizer, texts, chunk_size, subbatch_size)

def predict_text(model, tokenizer, text, chunk_size=CHUNK_SIZE, subbatch_size=32):
    batch_preds = predict_batch(model, tokenizer, [text], chunk_size, subbatch_size)
    return batch_preds[0]
```
## Single-Text Usage
``` python
model, tokenizer = load_model_and_tokenizer()
sample_text = "This is a sample text demonstrating certain stylistic features."
predictions = predict_text(model, tokenizer, sample_text)
print("Binary feature vector:", predictions)
# For example: [0, 1, 0, 1, ... 1, 0] (96-length)
```
## Batch Usage
``` python
docs = [
    "First text goes here.",
    "Second text, slightly different style."
]
model, tokenizer = load_model_and_tokenizer()
preds = predict_batch(model, tokenizer, docs)
print(preds.shape) # (2, 96)
```
## How It Works
Neurobiber is fine-tuned RoBERTa. Given a text:
1. The text is split into **chunks** (up to 512 tokens each).
2. Each chunk is fed through the model to produce **96 logistic outputs** (one per feature).
3. The feature probabilities are aggregated across chunks so that each feature is marked as `1` if it appears in at least one chunk.
Each row in `preds` is a 96-element array corresponding to the feature order in `BIBER_FEATURES`.
## Interpreting Outputs
- Each element in the vector is a binary label (0 or 1), indicating the model's detection of a specific linguistic feature (e.g., `BIN_VBD` for past tense verbs).
- For long texts, we chunk them into segments of up to 512 tokens. If a feature appears in any chunk, you get a 1 for that feature.
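To make the vector easier to read, its indices can be mapped back to feature names; a small sketch reusing `predict_text`, `model`, `tokenizer`, and `BIBER_FEATURES` from above (the example sentence and printed output are illustrative):
```python
# Map the 96-length binary vector back to named features for inspection.
preds = predict_text(model, tokenizer, "She had quietly finished the report before noon.")
active_features = [name for name, flag in zip(BIBER_FEATURES, preds) if flag == 1]
print(active_features)  # e.g. ['BIN_VBD', 'BIN_RB', ...]
```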
|
lesso/81ee6aa8-1abc-4905-8446-40b88b66ce39 | lesso | "2024-12-16T19:20:37Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | "2024-12-16T19:13:09Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81ee6aa8-1abc-4905-8446-40b88b66ce39
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: false
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e8ca0ac66aa11e96_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e8ca0ac66aa11e96_train_data.json
  type:
    field_instruction: Hausa
    field_output: English
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso/81ee6aa8-1abc-4905-8446-40b88b66ce39
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 1
mlflow_experiment_name: /tmp/e8ca0ac66aa11e96_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 81ee6aa8-1abc-4905-8446-40b88b66ce39
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 81ee6aa8-1abc-4905-8446-40b88b66ce39
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 81ee6aa8-1abc-4905-8446-40b88b66ce39
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.9802 | 0.0003 | 1 | 11.5776 |
| 10.434 | 0.0009 | 3 | 11.5776 |
| 12.2132 | 0.0018 | 6 | 11.5231 |
| 9.0586 | 0.0027 | 9 | 11.0667 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sakalti/LunarPass-1 | Sakalti | "2025-01-24T10:23:21Z" | 18 | 1 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"base_model:Sakalti/Lunar-4B",
"base_model:finetune:Sakalti/Lunar-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-24T10:20:41Z" | ---
base_model:
- Sakalti/Lunar-4B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Sakalti/Lunar-4B](https://huggingface.co/Sakalti/Lunar-4B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: Sakalti/Lunar-4B
    layer_range: [0, 28]
- sources:
  - model: Sakalti/Lunar-4B
    layer_range: [5, 28]
- sources:
  - model: Sakalti/Lunar-4B
    layer_range: [28, 32]
merge_method: passthrough
dtype: float16
```
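The passthrough method stacks the listed slices verbatim, so the merged model should end up with 28 + 23 + 4 = 55 layers, with layers 5-27 of Lunar-4B appearing twice.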
|
jiatsol/First_llm | jiatsol | "2025-03-08T06:14:23Z" | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-08T01:23:03Z" | ---
license: apache-2.0
tags:
- trl
- sft
---
|
mradermacher/omed-llama3-8b-i1-GGUF | mradermacher | "2024-09-08T20:43:03Z" | 11 | 1 | transformers | [
"transformers",
"gguf",
"ko",
"en",
"base_model:hyemijo/omed-llama3-8b",
"base_model:quantized:hyemijo/omed-llama3-8b",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-08-18T03:01:38Z" | ---
base_model: hyemijo/omed-llama3-8b
language:
- ko
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hyemijo/omed-llama3-8b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/omed-llama3-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
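Once downloaded, a single-file quant from the table below can be loaded with llama-cpp-python; a minimal sketch (the context size and prompt are illustrative):
```python
# Minimal sketch: run one of the quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="omed-llama3-8b.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=32)
print(out["choices"][0]["text"])
```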
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/omed-llama3-8b-i1-GGUF/resolve/main/omed-llama3-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Organika/sdxl-detector | Organika | "2024-09-06T12:17:24Z" | 13,181 | 34 | transformers | [
"transformers",
"onnx",
"safetensors",
"swin",
"image-classification",
"autotrain",
"dataset:Colby/autotrain-data-sdxl-detection",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-31T04:54:35Z" | ---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- Colby/autotrain-data-sdxl-detection
license: cc-by-nc-3.0
---
# SDXL Detector
This model was created by fine-tuning the [umm-maybe AI art detector](https://huggingface.co/umm-maybe/AI-image-detector) on a dataset of Wikimedia-SDXL image pairs, where the SDXL image is generated using a prompt based upon a BLIP-generated caption describing the Wikimedia image.
This model demonstrates greatly improved performance over the umm-maybe detector on images generated by more recent diffusion models as well as non-artistic imagery (given the broader range of subjects depicted in the random sample drawn from Wikimedia).
However, its performance may be lower for images generated using models other than SDXL. In particular, this model underperforms the original detector for images generated using older models (such as VQGAN+CLIP).
The data used for this fine-tune is either synthetic (generated by SDXL) and therefore non-copyrightable, or downloaded from Wikimedia and therefore meeting their definition of "free data" (see https://commons.wikimedia.org/wiki/Commons:Licensing for details). However, the original umm-maybe AI art detector was trained on data scraped from image links in Reddit posts, some of which may be copyrighted. Therefore this model as well as its predecessor should be considered appropriate for non-commercial (i.e. personal or educational) fair uses only.
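As a hedged usage sketch (the image path is a placeholder), the detector can be called through the standard image-classification pipeline:
```python
# Minimal sketch: classify a local image with this detector.
from transformers import pipeline

detector = pipeline("image-classification", model="Organika/sdxl-detector")
print(detector("example.png"))  # returns label/score pairs; label names come from the model config
```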
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.08717025071382523
- f1: 0.9732620320855615
- precision: 0.994535519125683
- recall: 0.9528795811518325
- auc: 0.9980461893059392
- accuracy: 0.9812734082397003 |
Moriacrafter/Qwen1.5-0.5B-8bit_DepressionDetection | Moriacrafter | "2024-06-30T20:22:42Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-30T20:22:11Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF | mradermacher | "2025-03-03T09:53:39Z" | 293 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DavidAU/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B",
"base_model:quantized:DavidAU/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T12:06:55Z" | ---
base_model: DavidAU/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q2_K.gguf) | Q2_K | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q3_K_S.gguf) | Q3_K_S | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q3_K_M.gguf) | Q3_K_M | 9.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q3_K_L.gguf) | Q3_K_L | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.IQ4_XS.gguf) | IQ4_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q4_K_M.gguf) | Q4_K_M | 11.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q5_K_S.gguf) | Q5_K_S | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q5_K_M.gguf) | Q5_K_M | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q6_K.gguf) | Q6_K | 15.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B-GGUF/resolve/main/L3-SMB-Grand-STORY-F32-Ultra-FORESHADOW-Monster-18.5B.Q8_0.gguf) | Q8_0 | 20.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JCKipkemboi/bert_fine_tuned | JCKipkemboi | "2025-03-22T16:46:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-22T16:22:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
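Until the authors fill this in, here is a minimal hedged sketch using the standard `transformers` pipeline API; the example sentence is arbitrary, and the label names returned are whatever this checkpoint was trained with (not documented here).
```python
# Minimal sketch (assumption: standard text-classification usage of this checkpoint;
# the returned label names depend on how the model was fine-tuned).
from transformers import pipeline

classifier = pipeline("text-classification", model="JCKipkemboi/bert_fine_tuned")
print(classifier("This movie was surprisingly good."))
```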
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stojchet/ndpo6 | stojchet | "2024-07-16T11:55:19Z" | 101 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"dataset:generator",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:finetune:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-16T11:27:21Z" | ---
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- trl
- dpo
- generated_from_trainer
datasets:
- generator
model-index:
- name: ndpo6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/ndpo6)
# ndpo6
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
- mixed_precision_training: Native AMP
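For orientation, the hyperparameters above map onto TRL's DPO API roughly as follows. This is a minimal sketch under stated assumptions (a recent TRL release exposing `DPOConfig`, and a toy placeholder preference dataset), not the exact training script.
```python
# Minimal sketch of the hyperparameters above expressed with TRL's DPO API.
# Assumptions: a recent TRL release exposing DPOConfig (older versions pass
# tokenizer= instead of processing_class=); the dataset here is a toy placeholder.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy preference data: DPO expects prompt / chosen / rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["Write a function that adds two numbers."],
    "chosen": ["def add(a, b):\n    return a + b"],
    "rejected": ["def add(a, b): pass"],
})

args = DPOConfig(
    output_dir="ndpo6",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=64,  # 8 * 64 = effective batch size 512
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_steps=200,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)

trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```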
### Training results
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
sail-rvc/TikTokTTS4500 | sail-rvc | "2023-07-14T07:33:08Z" | 2 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:32:55Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# TikTokTTS4500
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:33:07
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
Bijibapakmu/aimyon | Bijibapakmu | "2023-05-12T16:54:13Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-12T16:47:26Z" | ---
license: creativeml-openrail-m
---
|
sezer/model | sezer | "2024-11-17T19:04:58Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-17T18:35:30Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sezer
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ubiqtuitin/deepseek_r1_8b_medical_rk | ubiqtuitin | "2025-02-17T16:59:19Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T16:46:42Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ubiqtuitin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
owanr/SChem5Labels-google-t5-v1_1-large-intra_model-dataset-frequency-human_annots_str | owanr | "2023-11-08T02:33:25Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"region:us"
] | null | "2023-11-08T02:33:23Z" | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: SChem5Labels-google-t5-v1_1-large-intra_model-dataset-frequency-human_annots_str
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SChem5Labels-google-t5-v1_1-large-intra_model-dataset-frequency-human_annots_str
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 20.6291 | 1.0 | 25 | 25.1686 |
| 19.5807 | 2.0 | 50 | 23.6946 |
| 18.3885 | 3.0 | 75 | 16.4506 |
| 15.6621 | 4.0 | 100 | 10.3654 |
| 14.1536 | 5.0 | 125 | 9.6059 |
| 11.1395 | 6.0 | 150 | 9.4016 |
| 10.234 | 7.0 | 175 | 9.1651 |
| 8.6134 | 8.0 | 200 | 8.8517 |
| 8.4527 | 9.0 | 225 | 8.7267 |
| 8.1685 | 10.0 | 250 | 8.6661 |
| 8.1419 | 11.0 | 275 | 8.5700 |
| 8.1851 | 12.0 | 300 | 8.4129 |
| 7.9855 | 13.0 | 325 | 8.1392 |
| 7.5843 | 14.0 | 350 | 7.8676 |
| 7.4604 | 15.0 | 375 | 7.6409 |
| 7.2433 | 16.0 | 400 | 7.4793 |
| 7.1782 | 17.0 | 425 | 7.3690 |
| 6.9756 | 18.0 | 450 | 7.2904 |
| 6.8602 | 19.0 | 475 | 7.2253 |
| 6.9456 | 20.0 | 500 | 7.1670 |
| 6.5134 | 21.0 | 525 | 0.6126 |
| 0.8013 | 22.0 | 550 | 0.5889 |
| 0.6372 | 23.0 | 575 | 0.5655 |
| 0.6054 | 24.0 | 600 | 0.5584 |
| 0.6018 | 25.0 | 625 | 0.5609 |
| 0.5884 | 26.0 | 650 | 0.5572 |
| 0.5887 | 27.0 | 675 | 0.5585 |
| 0.5919 | 28.0 | 700 | 0.5575 |
| 0.5908 | 29.0 | 725 | 0.5570 |
| 0.5859 | 30.0 | 750 | 0.5510 |
| 0.6028 | 31.0 | 775 | 0.5558 |
| 0.594 | 32.0 | 800 | 0.5491 |
| 0.5834 | 33.0 | 825 | 0.5556 |
| 0.5849 | 34.0 | 850 | 0.5515 |
| 0.5791 | 35.0 | 875 | 0.5519 |
| 0.5863 | 36.0 | 900 | 0.5553 |
| 0.5897 | 37.0 | 925 | 0.5486 |
| 0.582 | 38.0 | 950 | 0.5555 |
| 0.579 | 39.0 | 975 | 0.5519 |
| 0.5816 | 40.0 | 1000 | 0.5530 |
| 0.5822 | 41.0 | 1025 | 0.5483 |
| 0.5838 | 42.0 | 1050 | 0.5548 |
| 0.5767 | 43.0 | 1075 | 0.5479 |
| 0.5757 | 44.0 | 1100 | 0.5498 |
| 0.5911 | 45.0 | 1125 | 0.5515 |
| 0.5665 | 46.0 | 1150 | 0.5556 |
| 0.5773 | 47.0 | 1175 | 0.5518 |
| 0.5727 | 48.0 | 1200 | 0.5499 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
tuantmdev/7ab318e0-a3cf-45d9-bff6-d386340a57e7 | tuantmdev | "2025-02-10T17:57:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-10T15:57:56Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ab318e0-a3cf-45d9-bff6-d386340a57e7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbcc85776193c7ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbcc85776193c7ad_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuantmdev/7ab318e0-a3cf-45d9-bff6-d386340a57e7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2e-05
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 40
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fbcc85776193c7ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_strategy: best
saves_per_epoch: 5
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 861c0084-b845-4e35-9b2f-ffbd4ba5fae8
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: 861c0084-b845-4e35-9b2f-ffbd4ba5fae8
warmup_steps: 80
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 7ab318e0-a3cf-45d9-bff6-d386340a57e7
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.7746 |
| 43.8063 | 0.0050 | 40 | 2.7672 |
| 43.9191 | 0.0100 | 80 | 2.7285 |
| 43.3074 | 0.0150 | 120 | 2.6594 |
| 41.7411 | 0.0200 | 160 | 2.6218 |
| 41.2435 | 0.0250 | 200 | 2.6158 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LarryAIDraw/kamuro_masumi | LarryAIDraw | "2024-02-02T13:33:20Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-02-02T13:29:27Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/102416/kamuro-masumi-or-or-classroom-of-the-elite |
yeye776/autotrain-intent-classification-5categories-bert-kor-base-90853144392 | yeye776 | "2023-09-22T09:20:56Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:yeye776/autotrain-data-intent-classification-5categories-bert-kor-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-22T09:20:03Z" | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- yeye776/autotrain-data-intent-classification-5categories-bert-kor-base
co2_eq_emissions:
emissions: 0.03180363801413368
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90853144392
- CO2 Emissions (in grams): 0.0318
## Validation Metrics
- Loss: 0.078
- Accuracy: 0.963
- Macro F1: 0.949
- Micro F1: 0.963
- Weighted F1: 0.964
- Macro Precision: 0.950
- Micro Precision: 0.963
- Weighted Precision: 0.972
- Macro Recall: 0.960
- Micro Recall: 0.963
- Weighted Recall: 0.963
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/yeye776/autotrain-intent-classification-5categories-bert-kor-base-90853144392
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned intent classifier and its tokenizer
# (use_auth_token=True is only needed while the repo is private)
model = AutoModelForSequenceClassification.from_pretrained("yeye776/autotrain-intent-classification-5categories-bert-kor-base-90853144392", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yeye776/autotrain-intent-classification-5categories-bert-kor-base-90853144392", use_auth_token=True)

# Tokenize an example utterance and run a forward pass to get the logits
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
m-aliabbas1/Reinforce-cartpol1 | m-aliabbas1 | "2023-07-25T03:10:15Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-25T03:10:05Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpol1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Eagle51/DeepSeek-R1-Distill-Qwen-Tobacco-1.5B | Eagle51 | "2025-02-27T04:10:26Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-27T04:08:59Z" | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Eagle51
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rbai86/swiss-7b-core-q8_0-GUFF | rbai86 | "2023-11-28T12:56:44Z" | 0 | 1 | transformers | [
"transformers",
"mistral",
"finetune",
"augmentation",
"english",
"italian",
"german",
"text-generation",
"en",
"de",
"it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-28T08:58:42Z" | ---
license: apache-2.0
language:
- en
- de
- it
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
- finetune
- augmentation
- english
- italian
- german
- transformers
---

## Swiss 7B Core (EN, DE, IT)
Swiss 7B Core is a 7B LLM based on the Mistral model. It currently supports English, German, and Italian. |
EmirhanExecute/CartPole-try2 | EmirhanExecute | "2023-08-31T21:26:41Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-31T21:26:33Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-try2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rwitz2/grindin | rwitz2 | "2023-12-12T21:39:56Z" | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-12-12T18:12:56Z" | ---
library_name: peft
base_model: AIDC-ai-business_Marcoroni-7B-v3
license: cc-by-nc-4.0
---
Marcoroni-7B-v3 fine-tuned with DPO on argilla/distilabel-math-preference-dpo for 1 epoch.
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
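As a stopgap, here is a minimal hedged sketch of loading this repo as a PEFT (LoRA) adapter; the hub id of the base model is inferred from the `base_model` field above (`AIDC-ai-business_Marcoroni-7B-v3`) and is an assumption.
```python
# Minimal sketch (assumption: this repo holds a LoRA adapter for Marcoroni-7B-v3;
# the base-model hub id below is inferred from the metadata, not confirmed).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "AIDC-ai-business/Marcoroni-7B-v3"  # inferred hub id (assumption)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16,
                                            device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, "rwitz2/grindin")
```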
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
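For reference, the quantization settings above correspond roughly to the following `BitsAndBytesConfig` in transformers; this is a hedged sketch for reloading the base model the same way, with the base-model id again inferred from the metadata.
```python
# Minimal sketch: the 4-bit NF4 settings listed above, reconstructed with
# transformers' BitsAndBytesConfig (requires the bitsandbytes package).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "AIDC-ai-business/Marcoroni-7B-v3",  # inferred base-model id (assumption)
    quantization_config=bnb_config,
    device_map="auto",
)
```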
### Framework versions
- PEFT 0.6.2 |
speaches-ai/piper-en_US-ryan-low | speaches-ai | "2025-03-20T04:53:06Z" | 0 | 0 | null | [
"onnx",
"speaches",
"piper",
"text-to-speech",
"en",
"region:us"
] | text-to-speech | "2025-03-20T04:53:03Z" | ---
language: en
pipeline_tag: text-to-speech
tags:
- speaches
- piper
library: onnx
---
Run this model using [speaches](https://github.com/speaches-ai/speaches) |
cimol/51d7c703-1b14-4d1f-9f35-7304034c10bf | cimol | "2025-02-18T23:49:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T23:30:12Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51d7c703-1b14-4d1f-9f35-7304034c10bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 47f47288c3bcf0de_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47f47288c3bcf0de_train_data.json
type:
field_input: chosen_model
field_instruction: language
field_output: rejected
field_system: rejected_model
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 8
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/51d7c703-1b14-4d1f-9f35-7304034c10bf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3e-5
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 15
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1500
micro_batch_size: 8
mlflow_experiment_name: /tmp/47f47288c3bcf0de_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 15
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: false
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 16
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb8829bf-0e4c-4904-995a-3e14b40486a4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cb8829bf-0e4c-4904-995a-3e14b40486a4
warmup_steps: 50
weight_decay: 0.1
xformers_attention: null
```
</details><br>
# 51d7c703-1b14-4d1f-9f35-7304034c10bf
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 17333
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08 (optimizer_args: adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0081 | 1 | 1.6197 |
| 1.0417 | 1.2097 | 150 | 1.1342 |
| 0.8028 | 2.4194 | 300 | 1.1495 |
| 0.7639 | 3.6290 | 450 | 1.1280 |
| 0.7349 | 4.8387 | 600 | 1.1547 |
| 0.4962 | 6.0484 | 750 | 1.3302 |
| 0.2328 | 7.2581 | 900 | 1.4482 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |