| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 18:27:59) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 520 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 18:27:48) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
ProductGuySensei/imagesofbro | ProductGuySensei | 2025-05-26T09:03:21Z | 13 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-05-20T15:10:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cover
---
# Imagesofbro
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cover` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
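# Assumes REPLICATE_API_TOKEN is set in your environment so the client can authenticate.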
input = {
"prompt": "cover",
"lora_weights": "https://huggingface.co/ProductGuySensei/imagesofbro/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ProductGuySensei/imagesofbro', weight_name='lora.safetensors')
image = pipeline('cover').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
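As a quick illustration of that weighting/fusing workflow, here is a minimal sketch (not part of the original card; the 0.8 scale is an arbitrary example):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ProductGuySensei/imagesofbro', weight_name='lora.safetensors')

# Down-weight the LoRA and bake it into the base weights, which speeds up repeated inference.
pipeline.fuse_lora(lora_scale=0.8)  # illustrative scale; tune to taste
image = pipeline('cover').images[0]
```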
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ProductGuySensei/imagesofbro/discussions) to add images that show off what you’ve made with this LoRA.
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e1 | ahmedelgebaly | 2025-05-26T09:02:42Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-05-24T19:47:49Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7263
## Model description
More information needed
## Intended uses & limitations
More information needed
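Since the card does not yet include usage code, here is a minimal inference sketch, assuming 4-bit loading that mirrors the training config above and an Alpaca-style prompt (per `type: alpaca` in the config); adjust to your setup:
```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e1"

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the QLoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "### Instruction:\nAnswer the question.\n\n### Input:\nWho wrote Hamlet?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```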
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5019 | 0.0005 | 1 | 1.6930 |
| 0.6246 | 0.2501 | 486 | 0.7879 |
| 0.6935 | 0.5001 | 972 | 0.7512 |
| 0.5706 | 0.7502 | 1458 | 0.7263 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bigband/EnchantingDumuzi | bigband | 2025-05-26T09:02:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-26T08:53:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally, not as text=
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e2 | ahmedelgebaly | 2025-05-26T09:01:38Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-05-24T20:16:35Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5019 | 0.0005 | 1 | 1.6930 |
| 0.6274 | 0.2501 | 486 | 0.7914 |
| 0.6962 | 0.5001 | 972 | 0.7567 |
| 0.5719 | 0.7502 | 1458 | 0.7311 |
| 0.6021 | 1.0003 | 1944 | 0.7159 |
| 0.5002 | 1.2483 | 2430 | 0.7223 |
| 0.5363 | 1.4983 | 2916 | 0.7147 |
| 0.5215 | 1.7484 | 3402 | 0.7109 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aejion/AccVideo-WanX-T2V-14B | aejion | 2025-05-26T09:01:03Z | 0 | 3 | diffusers | ["diffusers", "safetensors", "t2v", "arxiv:2503.19462", "region:us"] | null | 2025-05-26T03:12:52Z | # AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset
This repository is the official PyTorch implementation of [AccVideo](https://arxiv.org/abs/2503.19462). AccVideo is a novel and efficient distillation method that accelerates video diffusion models using a synthetic dataset. Our method is 8.5x faster than HunyuanVideo.
[](https://arxiv.org/abs/2503.19462)
[](https://aejion.github.io/accvideo/)
[](https://huggingface.co/aejion/AccVideo)
## 🔥🔥🔥 News
* May 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo-WanX-T2V-14B) of AccVideo based on WanXT2V-14B.
* Mar 31, 2025: [ComfyUI-Kijai (FP8 Inference)](https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/accvideo-t2v-5-steps_fp8_e4m3fn.safetensors): ComfyUI-Integration by [Kijai](https://huggingface.co/Kijai)
* Mar 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo) of AccVideo based on HunyuanT2V.
## 🎥 Demo (Based on HunyuanT2V)
https://github.com/user-attachments/assets/59f3c5db-d585-4773-8d92-366c1eb040f0
## 🎥 Demo (Based on WanXT2V-14B)
## 📑 Open-source Plan
- [x] Inference
- [x] Checkpoints
- [ ] Multi-GPU Inference
- [ ] Synthetic Video Dataset, SynVid
- [ ] Training
## 🔧 Installation
The code has been tested with Python 3.10.0 and CUDA 11.8 on an A100 GPU.
```
conda create -n accvideo python==3.10.0
conda activate accvideo
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
pip install "huggingface_hub[cli]"
```
## 🤗 Checkpoints
To download the checkpoints (based on HunyuanT2V), use the following command:
```bash
# Download the model weight
huggingface-cli download aejion/AccVideo --local-dir ./ckpts
```
To download the checkpoints (based on WanX-T2V-14B), use the following command:
```bash
# Download the model weight
huggingface-cli download aejion/AccVideo-WanX-T2V-14B --local-dir ./wanx_t2v_ckpts
```
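If you prefer to stay in Python, the same downloads can be done with `huggingface_hub` (an equivalent sketch, not from the original README):
```py
from huggingface_hub import snapshot_download

# Same effect as the CLI commands above.
snapshot_download(repo_id="aejion/AccVideo", local_dir="./ckpts")
snapshot_download(repo_id="aejion/AccVideo-WanX-T2V-14B", local_dir="./wanx_t2v_ckpts")
```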
## 🚀 Inference
We recommend using a GPU with 80GB of memory. We use AccVideo to distill Hunyuan and WanX.
### Inference for HunyuanT2V
To run the inference, use the following command:
```bash
export MODEL_BASE=./ckpts
python sample_t2v.py \
--height 544 \
--width 960 \
--num_frames 93 \
--num_inference_steps 5 \
--guidance_scale 1 \
--embedded_cfg_scale 6 \
--flow_shift 7 \
--flow-reverse \
--prompt_file ./assets/prompt.txt \
--seed 1024 \
--output_path ./results/accvideo-544p \
--model_path ./ckpts \
--dit-weight ./ckpts/accvideo-t2v-5-steps/diffusion_pytorch_model.pt
```
The following table compares inference time on a single A100 GPU:
| Model | Setting (height × width × frames) | Inference Time (s) |
|:------------:|:---------------------------:|:-----------------:|
| HunyuanVideo | 720 × 1280 × 129f | 3234 |
| Ours | 720 × 1280 × 129f | 380 (8.5× faster) |
| HunyuanVideo | 544 × 960 × 93f | 704 |
| Ours | 544 × 960 × 93f | 91 (7.7× faster) |
### Inference for WanXT2V
To run the inference, use the following command:
```bash
python sample_wanx_t2v.py \
--task t2v-14B \
--size 832*480 \
--ckpt_dir ./wanx_t2v_ckpts \
--sample_solver 'unipc' \
--save_dir ./results/accvideo_wanx_14B \
--sample_steps 10
```
The following table compares inference time on a single A100 GPU:
| Model | Setting (height × width × frames) | Inference Time (s) |
|:-----:|:---------------------------:|:-----------------:|
| WanX | 480 × 832 × 81f | 932 |
| Ours | 480 × 832 × 81f | 97 (9.6× faster) |
## 🔗 BibTeX
If you find [AccVideo](https://arxiv.org/abs/2503.19462) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@article{zhang2025accvideo,
title={AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset},
author={Zhang, Haiyu and Chen, Xinyuan and Wang, Yaohui and Liu, Xihui and Wang, Yunhong and Qiao, Yu},
journal={arXiv preprint arXiv:2503.19462},
year={2025}
}
```
## Acknowledgements
The code is built upon [FastVideo](https://github.com/hao-ai-lab/FastVideo) and [HunyuanVideo](https://github.com/Tencent/HunyuanVideo); we thank all the contributors for open-sourcing their work.
|
MAAT-EL-DUAT/THERE-ARE-THOSE-WHO-BELONG-TO-LUCIFER | MAAT-EL-DUAT | 2025-05-26T09:00:30Z | 0 | 0 | null | ["region:us"] | null | 2025-05-26T09:00:17Z | THEOLOGICAL EVIL PREVALENT TODAY |
Nana95/aimodel | Nana95 | 2025-05-26T08:57:40Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-05-26T08:43:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: aimodel
---
# Aimodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `aimodel` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "aimodel",
"lora_weights": "https://huggingface.co/Nana95/aimodel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nana95/aimodel', weight_name='lora.safetensors')
image = pipeline('aimodel').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nana95/aimodel/discussions) to add images that show off what you’ve made with this LoRA.
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3 | ahmedelgebaly | 2025-05-26T08:50:39Z | 14 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-04-25T14:03:05Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e3
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e3
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1314 | 0.2443 | 8 | 1.0979 |
| 0.8408 | 0.4885 | 16 | 0.9646 |
| 0.8669 | 0.7328 | 24 | 0.9339 |
| 0.8588 | 0.9771 | 32 | 0.9197 |
| 0.8363 | 1.2137 | 40 | 0.9090 |
| 0.8021 | 1.4580 | 48 | 0.9028 |
| 0.833 | 1.7023 | 56 | 0.8995 |
| 0.8083 | 1.9466 | 64 | 0.8951 |
| 0.8215 | 2.1832 | 72 | 0.8948 |
| 0.824 | 2.4275 | 80 | 0.8945 |
| 0.802 | 2.6718 | 88 | 0.8936 |
| 0.7762 | 2.9160 | 96 | 0.8935 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2 | ahmedelgebaly | 2025-05-26T08:48:32Z | 16 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-04-25T14:02:55Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1295 | 0.2443 | 8 | 1.0980 |
| 0.8408 | 0.4885 | 16 | 0.9650 |
| 0.8677 | 0.7328 | 24 | 0.9346 |
| 0.8605 | 0.9771 | 32 | 0.9223 |
| 0.8401 | 1.2137 | 40 | 0.9130 |
| 0.8089 | 1.4580 | 48 | 0.9084 |
| 0.8434 | 1.7023 | 56 | 0.9068 |
| 0.8224 | 1.9466 | 64 | 0.9066 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
igzi/MNLP_document_encoder-finetuned | igzi | 2025-05-26T08:48:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2025-05-26T08:48:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
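Given the `bert` and `feature-extraction` tags on this row, a minimal embedding sketch (the CLS pooling choice is an assumption, not documented by the authors):
```py
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "igzi/MNLP_document_encoder-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["A sample document."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)
embedding = hidden[:, 0]  # CLS-token pooling; mean pooling may be intended instead
```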
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/jmota27-ACT-boat_cup_dataset-x65e4 | phospho-app | 2025-05-26T08:48:06Z | 0 | 0 | null | ["safetensors", "phosphobot", "act", "region:us"] | null | 2025-05-26T06:27:35Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [jmota27/boat_cup_dataset](https://huggingface.co/datasets/jmota27/boat_cup_dataset)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback | liuhuanjim013 | 2025-05-26T08:46:38Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "smolvlm", "image-text-to-text", "generated_from_trainer", "base_model:HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM2-500M-Video-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-05-22T01:54:26Z | ---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct
tags:
- generated_from_trainer
model-index:
- name: SmolVLM2-500M-Video-Instruct-video-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-500M-Video-Instruct-video-feedback
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
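In lieu of usage details, here is a minimal inference sketch, assuming the fine-tune keeps the base SmolVLM2 processor and chat template (`clip.mp4` is a hypothetical placeholder):
```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "liuhuanjim013/SmolVLM2-500M-Video-Instruct-video-feedback"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": [
    {"type": "video", "path": "clip.mp4"},  # hypothetical local file
    {"type": "text", "text": "Give feedback on this video."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```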
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
AK2042/Phishing_Website_detector | AK2042 | 2025-05-26T08:46:06Z | 0 | 0 | sklearn | ["sklearn", "en", "license:mit", "region:us"] | null | 2025-05-26T07:23:06Z | ---
license: mit
language:
- en
library_name: sklearn
---
# Phishing Website Detection using Machine Learning & SSL Certificate Analysis
This project is a machine learning-based web application to detect phishing websites using both URL-based features and SSL certificate metadata. It uses a trained model and provides an easy-to-use **Gradio interface** to check whether a given link is **legitimate** or **phishing**.
---
## Features
* Accepts a raw URL as input
* Uses lexical URL features + SSL certificate metadata
* Extracts SSL features like issuer, validity period, and self-signed status (a sketch follows this list)
* Trained ML model (Random Forest / XGBoost / etc.) saved as a `.pkl` file
* Gradio web interface (no backend deployment needed)
* Fast and lightweight prediction
* Built using Kaggle-curated phishing URL dataset
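A minimal sketch of how such SSL metadata could be pulled (illustrative only, not the project's actual code):
```py
import socket
import ssl

from OpenSSL import crypto  # pyOpenSSL, listed under Dependencies

def ssl_features(host: str, port: int = 443) -> dict:
    """Pull issuer, validity window, and a self-signed flag from a site's certificate."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # fetch metadata even when verification would fail
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER, returned regardless of verification
    cert = crypto.load_certificate(crypto.FILETYPE_ASN1, der)
    issuer = dict(cert.get_issuer().get_components())
    subject = dict(cert.get_subject().get_components())
    return {
        "issuer_cn": issuer.get(b"CN", b"").decode(),
        "not_before": cert.get_notBefore().decode(),  # ASN.1 time, e.g. '20250101000000Z'
        "not_after": cert.get_notAfter().decode(),
        "self_signed": issuer == subject,
    }
```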
---
## Project Structure
```
phishing-detector/
│
├── model/
│ └── phishing_model.pkl # Trained ML model
│
├── app.py # Main Gradio app
├── feature_extraction.py # Lexical feature extractor for URLs
├── train_model.py # (Optional) Script to retrain model
│
├── README.md # You are here!
└── requirements.txt # Python dependencies
```
---
## How It Works
1. User inputs a URL.
2. `feature_extraction.py` extracts URL-based features (length, special chars, etc.; a sketch follows this list).
3. Features are fed into a trained ML model (`phishing_model.pkl`).
4. Output shown on Gradio UI: **Legit** or **Phishing**
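A minimal sketch of the kind of lexical features step 2 refers to (illustrative; the real list lives in `feature_extraction.py`):
```py
import re

import tldextract  # listed under Dependencies

def lexical_features(url: str) -> dict:
    """Simple URL-shape features commonly used for phishing detection."""
    parts = tldextract.extract(url)  # -> subdomain, domain, suffix
    return {
        "url_length": len(url),
        "num_dots": url.count("."),
        "num_hyphens": url.count("-"),
        "num_digits": sum(ch.isdigit() for ch in url),
        "has_at_symbol": "@" in url,
        "uses_https": url.lower().startswith("https://"),
        "has_ip_host": bool(re.match(r"^https?://\d{1,3}(\.\d{1,3}){3}", url)),
        "subdomain_depth": len(parts.subdomain.split(".")) if parts.subdomain else 0,
    }
```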
---
## Setup & Run
### 1. Clone the Repository
```bash
git clone https://github.com/AK2042/Phishing_Website_detector.git
cd phishing-detector
```
### 2. Install Dependencies
```bash
pip install -r requirements.txt
```
### 3. Run the App
```bash
python app.py
```
Gradio will open the app in your browser at `http://127.0.0.1:7860`.
---
## Model Training (Optional)
To retrain the model with new data:
```bash
python train_model.py
```
This will generate a new `phishing_model.pkl`.
Link to the dataset: https://www.kaggle.com/datasets/eswarchandt/phishing-website-detector
## Dependencies
* `scikit-learn`
* `gradio`
* `OpenSSL`
* `tldextract`
* `pandas`, `numpy`
---
## References
* [PhishTank Dataset](https://www.phishtank.com/)
* [Kaggle Phishing URLs Dataset](https://www.kaggle.com/datasets)
* [Gradio Docs](https://gradio.app/)
---
## License
MIT License. Use freely with credit. |
bigband/ProteanEreshkigal | bigband | 2025-05-26T08:42:28Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-26T08:32:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally, not as text=
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
cleathley-dapth/bert-phishing-classifier-teacher | cleathley-dapth | 2025-05-26T06:26:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-05-26T06:24:10Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-phishing-classifier-teacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classifier-teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2894
- Accuracy: 0.878
- Auc: 0.951
## Model description
More information needed
## Intended uses & limitations
More information needed
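For reference, a minimal sketch of querying the classifier (assuming the standard `text-classification` pipeline works with this checkpoint; label names come from the model's config):
```py
from transformers import pipeline

clf = pipeline("text-classification", model="cleathley-dapth/bert-phishing-classifier-teacher")
print(clf("http://secure-login.example-bank.com/verify-account"))
# -> [{'label': ..., 'score': ...}]
```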
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.5004 | 1.0 | 263 | 0.3824 | 0.811 | 0.909 |
| 0.3798 | 2.0 | 526 | 0.3567 | 0.833 | 0.934 |
| 0.3914 | 3.0 | 789 | 0.3284 | 0.838 | 0.943 |
| 0.3755 | 4.0 | 1052 | 0.4358 | 0.809 | 0.941 |
| 0.3415 | 5.0 | 1315 | 0.3250 | 0.864 | 0.945 |
| 0.3378 | 6.0 | 1578 | 0.3317 | 0.864 | 0.946 |
| 0.32 | 7.0 | 1841 | 0.2918 | 0.882 | 0.948 |
| 0.3321 | 8.0 | 2104 | 0.2912 | 0.882 | 0.95 |
| 0.3102 | 9.0 | 2367 | 0.2868 | 0.873 | 0.951 |
| 0.3186 | 10.0 | 2630 | 0.2894 | 0.878 | 0.951 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
soob3123/GrayLine-Qwen3-14B-Planner | soob3123 | 2025-05-26T06:23:31Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "feature-extraction", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | feature-extraction | 2025-05-26T06:23:10Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** soob3123
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ViRAL-Nimra-Mehra-Video-Leaks/Original.Full.Clip.Nimra.Mehra.Viral.Video.Link.Official | ViRAL-Nimra-Mehra-Video-Leaks | 2025-05-26T06:19:33Z | 0 | 0 | null | ["region:us"] | null | 2025-05-26T06:19:20Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
TheGardener/Llama-0.8B-shortened-llama | TheGardener | 2025-05-26T06:17:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-26T06:14:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nerva1228/kxnainai1 | Nerva1228 | 2025-05-26T06:17:29Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-05-26T02:18:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kxnainai1
---
# Kxnainai1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kxnainai1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "kxnainai1",
"lora_weights": "https://huggingface.co/Nerva1228/kxnainai1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/kxnainai1', weight_name='lora.safetensors')
image = pipeline('kxnainai1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/kxnainai1/discussions) to add images that show off what you’ve made with this LoRA.
|
Centk/task-9-google-gemma-2b | Centk | 2025-05-26T06:11:31Z | 655 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us"] | null | 2025-05-10T09:19:20Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
rendoo/06_rendoo_05_972 | rendoo | 2025-05-26T06:07:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:57:45Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
    "text-generation",
    model="OpenGenerativeAI/Bifrost-27B",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": "Generate a secure API key management system."}]
# Pass the messages positionally; `text=` is not a valid keyword for the
# text-generation pipeline's __call__.
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Wuhall/xlm-roberta-base-cls | Wuhall | 2025-05-26T06:03:10Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"zh",
"en",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
]
| null | 2025-05-26T05:57:23Z | ---
license: mit
language:
- zh
- en
base_model:
- FacebookAI/xlm-roberta-base
---
{"eval_loss": 0.02062925696372986, "eval_accuracy": 0.9971910112359551, "eval_runtime": 9.3475, "eval_samples_per_second": 76.17, "eval_steps_per_second": 4.814, "epoch": 4.0} |
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst | jeongseokoh | 2025-05-26T06:03:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T13:50:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
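In lieu of the missing snippet, a standard Transformers loading sketch for a Llama-architecture causal LM (inferred from the repo tags) is given below; the dtype and device settings are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a standard Llama-style causal LM, per the repo tags.
repo = "jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Question: What is 2 + 2?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```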
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rendoo/05_rendoo_05_159 | rendoo | 2025-05-26T05:51:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:41:39Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
    "text-generation",
    model="OpenGenerativeAI/Bifrost-27B",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": "Generate a secure API key management system."}]
# Pass the messages positionally; `text=` is not a valid keyword for the
# text-generation pipeline's __call__.
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
lyu-boxuan/T5-sMBR-PP-ZH | lyu-boxuan | 2025-05-26T05:44:25Z | 0 | 0 | null | [
"safetensors",
"mt5",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T03:10:24Z | ---
license: apache-2.0
---
|
VIDEO-18-Koko-Viral-Video/wATCH.Koko.Viral.Video.Original.Link.Official | VIDEO-18-Koko-Viral-Video | 2025-05-26T05:41:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T05:41:30Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
nyrasea/mongolia | nyrasea | 2025-05-26T05:41:25Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T05:03:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KIKU315/my-new-shiny-tokenizer | KIKU315 | 2025-05-26T05:40:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T05:40:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
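Judging by the repo name, this appears to be a tokenizer-only upload; a hedged sketch for loading just the tokenizer:

```python
from transformers import AutoTokenizer

# Assumption: the repo contains only tokenizer files (per its name);
# there may be no model weights to pair with it.
tokenizer = AutoTokenizer.from_pretrained("KIKU315/my-new-shiny-tokenizer")
print(tokenizer.tokenize("Hello, world!"))
```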
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enosislabs/midnight-mini-high-thinking-exp | enosislabs | 2025-05-26T05:36:11Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen3",
"text-generation",
"qwen",
"qwen3-4b",
"unsloth",
"midnight-ai",
"enosis-labs",
"code-generation",
"mathematics",
"reasoning",
"fine-tuned",
"MMLU",
"HumanEval",
"HellaSwag",
"Winogrande",
"LAMBADA",
"CEVAL",
"conversational",
"en",
"es",
"zh",
"dataset:enosislabs/math-mini-shareGPT",
"dataset:enosislabs/midnight-mini-think-shareGPT",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T03:25:07Z | ---
license: apache-2.0
language:
- en
- es
- zh
tags:
- qwen
- qwen3-4b
- unsloth
- midnight-ai
- enosis-labs
- text-generation
- code-generation
- mathematics
- reasoning
- fine-tuned
- MMLU
- HumanEval
- HellaSwag
- Winogrande
- LAMBADA
- CEVAL
pipeline_tag: text-generation
model_name: Midnight Mini High Thinking
model_id: enosislabs/midnight-mini-high-thinking-exp
base_model: Qwen/Qwen3-4B
datasets:
- enosislabs/math-mini-shareGPT
- enosislabs/midnight-mini-think-shareGPT
library_name: transformers
---
# Midnight Mini High Thinking: Efficient Reasoning Architecture
**Model ID:** `midnight-mini-high-thinking-05-25`
**Developed by:** Enosis Labs AI Research Division
**Model Version:** 05-25 (Production Release)
**Base Architecture:** Qwen3-4B
## Executive Summary
Midnight Mini High Thinking is a state-of-the-art causal language model engineered for complex reasoning applications within enterprise environments. This 4-billion parameter architecture delivers sophisticated analytical capabilities through advanced fine-tuning methodologies, demonstrating superior performance in mathematical computation, logical reasoning, and code synthesis tasks while maintaining computational efficiency for production deployment.
## Technical Specifications
### Core Architecture
- **Base Model:** Qwen/Qwen3-4B
- **Parameter Count:** 4.02 billion trainable parameters
- **Model Type:** Autoregressive Transformer (Causal Language Model)
- **Fine-tuning Framework:** Unsloth optimization pipeline
- **Quantization Support:** Native 16-bit precision, GGUF quantized variants (Q4_K_M, Q5_K_M, Q8_0)
- **Maximum Context Length:** 32,768 tokens
- **Vocabulary Size:** 151,936 tokens
- **Attention Heads:** 32 (Multi-Head Attention)
- **Hidden Dimensions:** 2,048
- **Feed-Forward Network Dimensions:** 11,008
### Performance Characteristics
The model architecture incorporates several advanced optimizations:
- **Enhanced Attention Mechanisms:** Specialized for multi-step reasoning workflows with improved long-range dependency modeling
- **Parameter-Efficient Fine-Tuning:** Utilizing LoRA (Low-Rank Adaptation) and QLoRA techniques for optimal training efficiency
- **Memory Optimization:** Gradient checkpointing and mixed-precision training for reduced memory footprint during inference
- **Inference Optimization:** Native support for key-value cache optimization and dynamic batching
### Deployment Formats
#### 16-bit Precision Model
- **Memory Requirements:** ~8GB VRAM (inference)
- **Inference Speed:** ~150-200 tokens/second (RTX 4090)
- **Precision:** Full fp16 precision for maximum accuracy
#### GGUF Quantized Variants
- **Q4_K_M:** 2.6GB, optimal balance of quality and efficiency
- **Q5_K_M:** 3.2GB, enhanced quality with moderate compression
- **Q8_0:** 4.3GB, near-original quality with minimal compression
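For the GGUF variants, local inference with `llama-cpp-python` might look like the sketch below. The exact GGUF filename is an assumption — the card does not list one — so the glob pattern would need to match whatever the release actually ships.

```python
from llama_cpp import Llama

# Assumption: a Q4_K_M GGUF file is published in this repo; adjust
# repo_id/filename to the actual release artifacts.
llm = Llama.from_pretrained(
    repo_id="enosislabs/midnight-mini-high-thinking-exp",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm("State the radius of convergence of the series for e^x.", max_tokens=128)
print(out["choices"][0]["text"])
```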
## Core Capabilities & Design Objectives
Midnight Mini High Thinking is specifically engineered for enterprise applications requiring sophisticated analytical capabilities:
### Primary Capabilities
- **Advanced Multi-Step Reasoning:** Demonstrates exceptional performance in complex logical sequences requiring iterative analysis and synthesis
- **Mathematical Computation & Analysis:** Excels in advanced mathematical operations, theorem proving, and quantitative analysis
- **Code Generation & Software Engineering:** Proficient in generating, debugging, and optimizing code across multiple programming languages
- **Technical Documentation Processing:** Advanced comprehension and generation of technical documentation, research papers, and analytical reports
- **Multilingual Intelligence:** Primary optimization for English with demonstrated capabilities in Spanish and Chinese for specialized tasks
### Design Principles
- **Ethical AI Framework:** Integrated safety mechanisms for responsible AI deployment
- **Bias Mitigation:** Advanced training protocols designed to minimize harmful biases and promote equitable outputs
- **Computational Efficiency:** Optimized for production environments with resource-conscious design
- **Scalability:** Architecture designed for horizontal scaling in enterprise deployments
## Enterprise Applications & Use Cases
Midnight Mini High Thinking is architected for professional environments requiring sophisticated analytical capabilities:
### Primary Application Domains
- **Advanced Mathematical Research:** Complex problem solving, theorem verification, mathematical proof assistance, and quantitative analysis
- **Software Engineering & Development:** Code generation, debugging assistance, architecture planning, and technical documentation
- **Business Intelligence & Analytics:** Data analysis interpretation, report generation, and strategic decision support
- **Academic Research Support:** Literature analysis, research methodology assistance, and technical writing enhancement
- **Educational Technology:** Advanced tutoring systems, curriculum development, and personalized learning assistance
### Implementation Examples
#### Mathematical Analysis Implementation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize model with optimized settings
model_id = "enosislabs/midnight-mini-high-thinking-05-25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)
# Mathematical reasoning example
prompt = """Analyze the convergence properties of the Taylor series for e^x around x=0.
Provide a rigorous mathematical explanation including convergence radius and error bounds."""
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=400,
        temperature=0.7,
        do_sample=True,
        top_p=0.9
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Mathematical Analysis:\n{response}")
```
#### Code Generation & Technical Documentation
```python
# Advanced code generation with documentation
coding_prompt = """Design a Python class for implementing a thread-safe LRU cache
with TTL (time-to-live) functionality. Include comprehensive documentation
and error handling."""
inputs = tokenizer(coding_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=500,
        temperature=0.3,
        do_sample=True
    )
code_response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated Solution:\n{code_response}")
```
## Training Methodology & Data Engineering
### Training Infrastructure
- **Base Model:** Qwen/Qwen3-4B
- **Fine-tuning Framework:** Unsloth optimization pipeline with custom extensions
- **Hardware Configuration:** Multi-GPU training environment (A100 80GB clusters)
- **Training Duration:** 72 hours of optimized training across distributed systems
- **Optimization Strategy:** Parameter-efficient fine-tuning with LoRA and gradient accumulation
### Dataset Composition & Curation
The training regimen incorporates a proprietary, meticulously curated dataset collection designed to enhance analytical capabilities:
- **Mathematical Reasoning Corpus:** Advanced mathematical problems, proofs, and analytical reasoning chains
- **Code Generation Suite:** Multi-language programming challenges with comprehensive documentation requirements
- **Technical Documentation Archive:** Scientific papers, technical specifications, and analytical reports
- **Ethical Alignment Dataset:** Carefully curated examples promoting responsible AI behavior and bias mitigation
- **Multilingual Reasoning Collection:** Cross-linguistic reasoning tasks with emphasis on knowledge transfer
### Training Optimization Techniques
- **Gradient Checkpointing:** Memory-efficient training enabling larger effective batch sizes
- **Mixed Precision Training:** FP16 optimization for accelerated training without precision loss
- **Dynamic Learning Rate Scheduling:** Adaptive learning rate adjustment based on validation performance
- **Regularization Strategies:** Dropout, weight decay, and label smoothing for improved generalization
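As a rough illustration of how these techniques combine in a PEFT fine-tuning setup — the ranks, target modules, and hyperparameter values below are assumptions for the sketch, not the actual training recipe, which the card does not publish in full:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

args = TrainingArguments(
    output_dir="out",
    fp16=True,                      # mixed-precision training
    gradient_checkpointing=True,    # memory-efficient backprop
    gradient_accumulation_steps=8,  # larger effective batch size
    lr_scheduler_type="cosine",     # dynamic learning-rate schedule
    learning_rate=2e-4,
    weight_decay=0.01,              # regularization
)
```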
## Performance Benchmarks & Evaluation Results
Midnight Mini High Thinking has undergone comprehensive evaluation across industry-standard benchmarks, demonstrating exceptional performance characteristics for its parameter class.
### Benchmark Results Overview
| Benchmark Category | Task Specification | Metric | Score | Standard Error |
|:-------------------|:-------------------|:-------|:------|:---------------|
| **Code Generation** | | | | |
| | HumanEval | `pass@1` | 0.5920 | ±0.0389 |
| **Common Sense Reasoning** | | | | |
| | HellaSwag | `acc` | 0.5074 | ±0.0050 |
| | | `acc_norm` | 0.6782 | ±0.0047 |
| | Winogrande | `acc` | 0.6748 | ±0.0132 |
| **Language Modeling** | | | | |
| | LAMBADA OpenAI (English) | `acc` | 0.6218 | ±0.0068 |
| | | `perplexity` | 5.8048 | ±0.1720 |
| **Knowledge & Reasoning** | | | | |
| | MMLU (English) - General | `acc` | 0.6920 | ±0.0453 |
| | MMLU (English) - STEM | `acc` | 0.5870 | ±0.0734 |
| | MMLU (Spanish) - General | `acc` | 0.6050 | ±0.0246 |
| | MMLU (Spanish) - STEM | `acc` | 0.6304 | ±0.0720 |
| **Specialized Knowledge** | | | | |
| | CEVAL - Advanced Mathematics | `acc` | 0.5863 | ±0.1177 |
### Performance Analysis
**Code Generation Excellence:** The 59.2% pass@1 score on HumanEval demonstrates superior code synthesis capabilities, positioning the model among the top performers in its parameter class for software engineering applications.
**Knowledge Integration:** MMLU performance of 69.2% (English) indicates strong knowledge retention and application across diverse domains, with particularly notable STEM performance in Spanish (63.04%) suggesting effective cross-linguistic knowledge transfer.
**Reasoning Capabilities:** Winogrande accuracy of 67.48% and HellaSwag normalized accuracy of 67.82% demonstrate robust common-sense reasoning and contextual understanding.
**Mathematical Proficiency:** CEVAL mathematics performance of 58.63% showcases specialized mathematical reasoning capabilities, particularly valuable for technical and scientific applications.
## Model Limitations & Risk Assessment
### Technical Constraints
- **Knowledge Temporal Boundary:** Training data cutoff limits real-time information access and contemporary knowledge integration
- **Computational Resource Requirements:** 4B parameter architecture demands significant computational resources for optimal performance
- **Context Window Limitations:** 32,768 token limit may constrain processing of extremely large documents or extended conversations
- **Quantization Trade-offs:** GGUF variants exhibit quality degradation proportional to compression level
### Performance Limitations
- **Hallucination Potential:** Like all large language models, may generate factually incorrect or logically inconsistent outputs
- **Domain-Specific Accuracy:** Performance varies across specialized domains; validation recommended for critical applications
- **Language Proficiency Variance:** Optimal performance in English with graduated capabilities in Spanish and Chinese
- **Reasoning Depth Constraints:** Complex multi-step reasoning may occasionally exhibit logical gaps or incomplete analysis
### Bias & Fairness Considerations
- **Training Data Bias Inheritance:** May reflect societal biases present in training corpora despite mitigation efforts
- **Cultural Context Limitations:** Responses may exhibit Western-centric perspectives due to training data composition
- **Demographic Representation:** Potential underrepresentation of certain demographic groups in training examples
- **Professional Domain Bias:** May exhibit preferences toward certain professional or academic perspectives
## Ethical Framework & Responsible AI Implementation
### Safety Mechanisms
- **Content Safety Filters:** Integrated mechanisms to identify and refuse harmful content generation
- **Bias Detection & Mitigation:** Ongoing monitoring for discriminatory outputs with corrective measures
- **Harmful Use Prevention:** Design features to discourage malicious applications and misuse
- **Privacy Protection:** No retention of user inputs or personal data during inference
### Deployment Guidelines
- **Human Oversight Requirement:** Critical decisions should maintain human validation and review
- **Domain-Specific Validation:** Professional applications require subject matter expert verification
- **Continuous Monitoring:** Regular assessment of outputs for quality and ethical compliance
- **User Education:** Clear communication of model capabilities and limitations to end users
### Research Ethics Compliance
Development adheres to established AI research ethics principles:
- **Beneficence:** Designed to augment human capabilities and provide positive societal impact
- **Non-maleficence:** Active measures to prevent harmful applications and negative consequences
- **Autonomy:** Respects user agency while providing transparent information about model behavior
- **Justice:** Efforts to ensure equitable access and fair treatment across user populations
## Technical Support & Model Citation
### Model Attribution
When utilizing Midnight Mini High Thinking in research or production environments, please cite:
```bibtex
@software{midnight_mini_high_thinking_2025,
  author    = {Enosis Labs AI Research Division},
  title     = {Midnight Mini High Thinking: Efficient Reasoning Architecture},
  version   = {05-25},
  year      = {2025},
  publisher = {Enosis Labs},
  url       = {https://huggingface.co/enosislabs/midnight-mini-high-thinking-exp}
}
```
### Technical Support Channels
For technical inquiries, deployment assistance, or research collaboration:
- **Primary Contact:** <[email protected]>
- **Model Repository:** [Hugging Face Model Hub](https://huggingface.co/enosislabs/midnight-mini-high-thinking-exp)
### License & Distribution
Licensed under Apache 2.0, permitting commercial use, modification, and distribution with appropriate attribution.
---
**Enosis Labs AI Research Division**
*Advancing the frontiers of artificial intelligence through responsible innovation* |
duythanh1022/finetune-clip-flickr8-vi | duythanh1022 | 2025-05-26T05:35:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
]
| null | 2025-05-26T03:21:31Z | ---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
tags:
- generated_from_trainer
model-index:
- name: finetune-clip-flickr8-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-clip-flickr8-vi
This model is a fine-tuned version of [ybelkada/blip2-opt-2.7b-fp16-sharded](https://huggingface.co/ybelkada/blip2-opt-2.7b-fp16-sharded) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8792
## Model description
More information needed
## Intended uses & limitations
More information needed
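Pending details from the authors, a plausible loading sketch — assuming this repo is a PEFT adapter for the BLIP-2 base named in the card, and that a compatible processor can be taken from the upstream Salesforce checkpoint:

```python
import torch
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

base = Blip2ForConditionalGeneration.from_pretrained(
    "ybelkada/blip2-opt-2.7b-fp16-sharded", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "duythanh1022/finetune-clip-flickr8-vi")
# Assumption: the fp16-sharded repo mirrors Salesforce/blip2-opt-2.7b,
# so its processor should be compatible.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
```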
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1292 | 0.5 | 200 | 1.0219 |
| 1.0023 | 1.0 | 400 | 0.9269 |
| 0.951 | 1.5 | 600 | 0.8907 |
| 0.9298 | 2.0 | 800 | 0.8792 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
cslxx/PSD | cslxx | 2025-05-26T05:33:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T05:33:58Z | ---
license: apache-2.0
---
|
dfafdsaf/deberta_sentiment_5000 | dfafdsaf | 2025-05-26T05:32:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-25T17:58:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
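Absent an official snippet, the repo name and `text-classification` pipeline tag suggest a sentiment classifier; a hedged sketch (label names depend on the trained config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="dfafdsaf/deberta_sentiment_5000")
print(clf("I absolutely loved this movie."))
```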
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dhruvsangani/Sentiment_Analysis_of_Banking_Dataset | dhruvsangani | 2025-05-26T05:30:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T15:06:02Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tegarganang/MQware | tegarganang | 2025-05-26T05:28:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T15:07:18Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
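Until the authors provide one, a generic chat-style sketch for a Qwen3-architecture model (inferred from the repo tags); the tokenizer's bundled chat template is assumed to handle message formatting:

```python
from transformers import pipeline

chat = pipeline("text-generation", model="tegarganang/MQware")
messages = [{"role": "user", "content": "Give a one-sentence summary of what you can do."}]
print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```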
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jyoung105/ent2_t13 | jyoung105 | 2025-05-26T05:26:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T05:26:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Ent2_T13
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/jyoung105/ent2_t13/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyoung105/ent2_t13', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jyoung105/ent2_t13/discussions) to add images that show off what you’ve made with this LoRA.
|
Ash2749/trial3.1_8b | Ash2749 | 2025-05-26T05:21:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:19:00Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
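Pending fuller usage notes, here is a minimal loading sketch with plain transformers (the model id comes from this repo; chat-template usage and generation settings are assumptions based on the instruct base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("Ash2749/trial3.1_8b")
model = AutoModelForCausalLM.from_pretrained("Ash2749/trial3.1_8b", device_map="auto")

# The base model is instruction-tuned, so chat-style input is assumed
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```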
|
g-assismoraes/gemma-3-1b-it-agnews | g-assismoraes | 2025-05-26T05:16:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T02:33:58Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-3-1b-it-agnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-3-1b-it-agnews
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
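For reference, the configuration above expressed as transformers `TrainingArguments` — a minimal sketch, with `output_dir` as a hypothetical path:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-3-1b-it-agnews",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```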
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1073 | 1.0 | 27000 | 1.1091 |
| 1.0571 | 2.0 | 54000 | 1.1085 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
HajimeOgawa/gemma3-7b-mbti-chat-energy | HajimeOgawa | 2025-05-26T05:14:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:09:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
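In the absence of official instructions, a minimal sketch based on this repo's text-generation pipeline tag (the chat-style input is an assumption drawn from the "conversational" tag):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="HajimeOgawa/gemma3-7b-mbti-chat-energy")
messages = [{"role": "user", "content": "What kinds of activities give you energy?"}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```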
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
andyrdt/rl_loans | andyrdt | 2025-05-26T05:10:51Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T04:35:04Z | ---
license: apache-2.0
---
This repository contains models from the blog post [Do models say what they learn?](https://www.lesswrong.com/posts/abtegBoDfnCzewndm/do-models-say-what-they-learn).
Training code is available [here](https://github.com/andyrdt/rl_loans).
|
New-tutorial-Shruthi-Narayanan-Viral-Video/FULL.VIDEO.LINK.Bella.Shruti.Narayanan.Viral.Video.Leaks.Official | New-tutorial-Shruthi-Narayanan-Viral-Video | 2025-05-26T05:07:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T05:07:26Z |
|
chaimachabir/lora-data1-data2-tinyllama | chaimachabir | 2025-05-26T05:01:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T03:52:42Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: lora-data1-data2-tinyllama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-data1-data2-tinyllama
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
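In the meantime, a minimal sketch of loading this LoRA adapter with PEFT (base model taken from this card; generation settings are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "chaimachabir/lora-data1-data2-tinyllama")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```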
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1 |
dfafdsaf/deberta_sentiment_10000 | dfafdsaf | 2025-05-26T04:57:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T04:50:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
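Pending official instructions, a minimal sketch based on this repo's text-classification pipeline tag (the label names returned depend on this checkpoint's config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="dfafdsaf/deberta_sentiment_10000")
print(clf("The movie was surprisingly good."))  # -> [{'label': ..., 'score': ...}]
```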
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-Warung-Madura-Viral-Video/FULL.VIDEO.LINK.Madura.Viral.Video.Leaks.Official | New-tutorial-Warung-Madura-Viral-Video | 2025-05-26T04:52:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T04:51:45Z |
|
takahashi111/Qwen3-4B-unsloth-bnb-4bit_hiragana2katakana_20250524_checkpoint-16830 | takahashi111 | 2025-05-26T04:51:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T04:51:31Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** takahashi111
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
soonil/test-gemma-3-12b-it-04 | soonil | 2025-05-26T04:48:39Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-12b-it",
"base_model:finetune:unsloth/gemma-3-12b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T04:48:38Z | ---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** soonil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PonyDing/Mayishenxiang-Llama-model-8B | PonyDing | 2025-05-26T04:45:38Z | 0 | 0 | null | [
"gguf",
"llama",
"算命",
"预测",
"text-generation",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T06:04:28Z | ---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- 算命
- 预测
--- |
dfafdsaf/deberta_sentiment_50000 | dfafdsaf | 2025-05-26T04:45:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-25T18:00:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
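Pending official instructions, a minimal sketch using the model classes directly, based on this repo's text-classification pipeline tag (class meanings depend on this checkpoint's `id2label` config):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dfafdsaf/deberta_sentiment_50000")
model = AutoModelForSequenceClassification.from_pretrained("dfafdsaf/deberta_sentiment_50000")

inputs = tok("The movie was surprisingly good.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```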
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dfafdsaf/deberta_sentiment_30000 | dfafdsaf | 2025-05-26T04:43:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T04:32:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
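Pending official instructions, a minimal sketch based on this repo's text-classification pipeline tag (label names depend on this checkpoint's config):

```python
from transformers import pipeline

# top_k=None returns scores for every label rather than only the top one
clf = pipeline("text-classification", model="dfafdsaf/deberta_sentiment_30000", top_k=None)
print(clf("The plot dragged, but the acting was solid."))
```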
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
test-gen/qwen2-3b-unique_lr1e-5 | test-gen | 2025-05-26T04:34:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-26T04:21:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
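Pending official instructions, a minimal sketch based on this repo's feature-extraction pipeline tag (the pooling strategy is an assumption):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("test-gen/qwen2-3b-unique_lr1e-5")
model = AutoModel.from_pretrained("test-gen/qwen2-3b-unique_lr1e-5")

with torch.no_grad():
    out = model(**tok("def add(a, b): return a + b", return_tensors="pt"))
emb = out.last_hidden_state.mean(dim=1)  # mean pooling; the intended pooling is an assumption
print(emb.shape)
```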
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Intel/Qwen3-8B-int4-AutoRound-gptq-inc | Intel | 2025-05-26T04:29:22Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
]
| null | 2025-05-26T04:20:05Z | ---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
base_model:
- Qwen/Qwen3-8B
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) generated by [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### INT4 Inference(CPU/CUDA/INTEL GPU)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "Intel/Qwen3-8B-int4-AutoRound-gptq-inc"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
##INT4:
# thinking content: <think>
# Okay, the user is asking for a short introduction to large language models. Let me start by recalling what I know about them. Large language models are a type of AI that can process and generate human-like text. They're based on deep learning, right? I should mention their training process, using massive datasets. Maybe explain how they work with neural networks, like transformer architectures. Also, their applications are important—like answering questions, writing, coding. But I need to keep it concise. Wait, the user wants a short intro, so I shouldn't go into too much detail. Let me structure it: start with the definition, mention the training data, the technology (transformers), and then the applications. Also, maybe touch on their capabilities, like understanding context and generating coherent text. Oh, and maybe note that they're used in various fields. I should avoid jargon but still be accurate. Let me check if I'm missing anything. Oh, maybe mention that they're pre-trained on a lot of text, which allows them to handle multiple tasks. Yeah, that's a key point. Alright, time to put it all together in a clear, concise way.
# </think>
# content: Large language models (LLMs) are advanced AI systems trained on vast amounts of text data to understand and generate human-like language. Built using deep learning techniques, particularly transformer architectures, they process and analyze patterns in text to perform tasks like answering questions, writing stories, coding, and more. These models leverage extensive training data to grasp context, syntax, and semantics, enabling them to engage in complex conversations and adapt to diverse applications across fields like education, healthcare, and technology. Their ability to generate coherent, context-aware responses makes them a cornerstone of modern natural language processing.
##BF16:
# thinking content: <think>
# Okay, the user wants a short introduction to large language models. Let me start by defining what they are. They're AI systems trained on vast amounts of text data, right? I should mention their ability to understand and generate human-like text. Maybe include examples like GPT or BERT. Also, highlight their applications in tasks like answering questions, writing, coding, and more. Keep it concise but cover the key points: training data, capabilities, and use cases. Avoid technical jargon to keep it accessible. Let me check if I need to mention the scale of the models, like the number of parameters. That's important for context. Oh, and maybe touch on how they process different languages. Wait, the user said "short," so I shouldn't go into too much detail. Let me structure it: definition, training, capabilities, applications. That should cover it. Make sure it's clear and to the point.
# </think>
# content: Large language models (LLMs) are advanced AI systems trained on vast amounts of text data to understand and generate human-like language. They can answer questions, write stories, code, translate languages, and perform various tasks by analyzing patterns in the data. These models, like GPT or BERT, leverage massive datasets and complex algorithms to produce coherent, context-aware responses, making them powerful tools for communication, creativity, and problem-solving across multiple domains.
prompt = "9.11和9.8哪个数字大"
##INT4:
# thinking content:
# content: <think>
# 好的,我现在需要比较9.11和9.8哪个数字更大。首先,我应该回忆一下小数比较的方法。通常,比较小数的时候,我们会从左到右逐位比较,先看整数部分,如果整数部分相同,再比较小数部分。
# 首先,这两个数的整数部分都是9,所以整数部分相同。接下来比较小数部分。9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候我需要比较0.11和0.8的大小。
# 为了更清楚地比较,我可以把它们转换成同一位数的小数。比如,0.11可以看作0.110,而0.8可以看作0.800。这样,比较每一位:
# 第一位小数:0.110的十分位是1,而0.800的十分位是8。显然,8比1大,所以0.800比0.110大。因此,9.8比9.11大。
# 不过,我是不是应该再检查一下?比如,有没有可能在比较小数时出现其他情况?比如,如果小数位数不同,是否需要补零?比如,9.11和9.8,后者的小数位数更少,所以补零到相同位数的话,确实是0.110和0.800,这样比较更直观。
# 或者,我可以将它们转换为分数来比较。9.11等于9又11/100,而9.8等于9又80/100。显然,80/100比11/100大,所以9.8更大。
# 另外,也可以用数值的大小来直观比较。比如,9.8等于9.80,而9.11是9.11,显然9.80比9.11大,因为小数点后第一位8比1大。
# 不过,有没有可能我哪里弄错了?比如,是否在比较小数时应该先比较整数部分,然后小数部分?是的,没错。整数部分相同的情况下,比较小数部分,所以正确。
# 或者,有没有可能把9.8看成9.80,而9.11是9.11,所以
##BF16:
# thinking content:
# content: <think>
# 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的结构。9.11是一个小数,而9.8也是一个小数。看起来都是以9开头,但后面的小数部分不同。
# 首先,我应该比较整数部分。两个数的整数部分都是9,所以这时候需要比较小数部分。小数部分的话,9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候可能需要将它们转换成相同的位数来比较,或者直接比较小数点后的数字。
# 不过,可能用户不太清楚小数点后的位数如何比较。比如,9.11有两位小数,而9.8只有一位。这时候可能需要补零,让它们的小数位数相同。比如,9.8可以写成9.80,这样比较的话,0.80和0.11。这时候,0.80比0.11大,所以9.8比9.11大。
# 不过,也有可能用户会误以为小数点后的位数多的数字更大,比如认为9.11比9.8大,因为有两位小数。但其实小数点后的位数并不影响大小,而是要看每一位的数值。比如,第一位小数是十分位,第二位是百分位。所以,比较的时候,先看十分位,9.8的十分位是8,而9.11的十分位是1,显然8比1大,所以9.8更大。
# 不过,也有可能用户会混淆小数点后的位数,比如认为9.11的百分位是1,而9.8的百分位是0,所以可能误以为9.11更大。这时候需要明确说明比较的步骤,先比较整数部分,再比较小数部分,逐位比较。
# 另外,可能用户是在学习小数比较,所以需要详细解释步骤,确保他们理解如何正确比较小数。比如,先看整数部分,如果整数部分相同,再比较小数部分,从左到右逐位比较,直到找到不同的数字为止。
# 总结一下,正确的比较方法是:整数部分相同,比较小数部分,十分位8比1大,所以9.8更大。因此,答案应该是9.
prompt = "How many r in word strawberry"
##INT4:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I spell the word correctly. Let me write it out: S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. S-T-R-A-W-B-E-R-R-Y. Yes, that's strawberry. Now, I need to count the number of 'r's in there.
# Starting from the beginning: S is the first letter, then T, then R. So that's one 'r'. Then the next letters are A, W, B, E, R, R, Y. Wait, after the first R, there's another R later. Let me break it down letter by letter.
# S - no
# T - no
# R - yes, that's the first one
# A - no
# W - no
# B - no
# E - no
# R - second one
# R - third one
# Y - no
# So that's three 'r's? Wait, let me count again. The word is S-T-R-A-W-B-E-R-R-Y. So positions:
# 1. S
# 2. T
# 3. R
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R
# 9. R
# 10. Y
# So the 'r's are at positions 3, 8, and 9. That's three 'r's. Wait, but sometimes people might miscount. Let me check once more. The word is strawberry. Breaking it down:
# Strawberry: S-T-R-A-W-B-E-R-R-Y. Yes, so the third letter is R, then after the E, there are two R's. So that's three R's. So the answer should be 3. But wait, maybe I'm missing something. Let me write it out again:
# S T R A W B E R R Y
# So letters:
# 1. S
# 2. T
# 3. R
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R
# 9. R
# 10. Y
# Yes, three R's. So the answer is 3. I think that's correct. I don't see any other R's. Maybe the user is testing if I can count correctly, so I need to make sure. Let me check an online source or
##BF16:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I have the correct spelling of the word. Let me write it out: S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. Strawberry... yes, that's correct. Now, I need to count the number of 'r's in that spelling.
# Let me go through each letter one by one. Starting with the first letter: S. Not an 'r'. Next is T. Still not. Then R. Okay, that's one. Then A, W, B, E. So far, only one 'r'. Then comes the next letters: R. That's the second 'r'. Then another R. Wait, is there a third 'r'? Let me check again. The word is S-T-R-A-W-B-E-R-R-Y. So after the first R, there's a B, E, then two R's. So that's two R's? Wait, no. Let me count again. Let's break it down:
# 1. S
# 2. T
# 3. R (1st)
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R (2nd)
# 9. R (3rd)
# 10. Y
# Wait, so the letters are S, T, R, A, W, B, E, R, R, Y. So the 'r's are at positions 3, 8, and 9. That's three 'r's. But wait, maybe I miscounted. Let me write it out again:
# S-T-R-A-W-B-E-R-R-Y. So after the first R (position 3), then the next letters are A, W, B, E, then R (position 8), then another R (position 9). So that's three R's. But sometimes people might miss the second R. Let me check again. The word is strawberry. Let me spell it again: S-T-R-A-W-B-E-R-R-Y. Yes, that's correct. So the R's are in the third, eighth, and ninth positions. Therefore, there are three 'r's. Wait, but I thought maybe the answer was two. Maybe I need to confirm. Let me check an online source or think of another way. Alternatively, maybe
prompt = "请简短介绍一下阿里巴巴公司"
##INT4:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。他们可能是在做研究,或者想了解阿里巴巴的基本信息,或者只是好奇。不管怎样,简短是关键,所以我要抓住核心点。
# 阿里巴巴成立于1999年,由马云创立,这点必须提到。然后是它的主要业务,比如淘宝、天猫、支付宝这些大家比较熟悉的平台。可能还要提到它的国际业务,比如阿里国际站和Lazada,显示其全球影响力。
# 另外,阿里巴巴集团的结构,比如旗下的不同子公司,比如淘宝、天猫、菜鸟网络、阿里云等,这些可以简要带过。还有它的愿景,比如成为全球最大的电子商务公司,以及在科技和创新方面的投入,比如阿里云和大数据技术。
# 用户可能还想知道阿里巴巴的影响力,比如用户数量、市场份额,或者它在科技领域的成就。不过因为要简短,可能不需要太多数据,但可以提到它是中国最大的互联网公司之一,全球知名的电商平台。
# 还要注意避免太技术性的术语,保持简洁易懂。可能需要检查是否有遗漏的重要信息,比如阿里巴巴的上市时间(2014年),或者其在社交媒体和物流方面的业务,比如菜鸟网络。不过这些可能属于次要信息,可以简略带过。
# 最后,确保整体结构清晰,先介绍成立时间、创始人,然后主要业务,再提到国际业务和科技贡献,最后总结其地位和愿景。这样用户就能快速了解阿里巴巴的基本情况。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,由马云创立,是中国乃至全球最具影响力的互联网企业之一。其核心业务涵盖电子商务(如淘宝、天猫)、数字支付(支付宝)、云计算(阿里云)、物流(菜鸟网络)及全球化零售(Lazada、阿里国际站)等领域。阿里巴巴致力于通过技术创新推动商业变革,现已成为全球最大的电商平台之一,并在人工智能、大数据等前沿科技领域持续布局。2014年,其在美国纽约证券交易所上市(股票代码:BABA)。
##BF16:
# thinking content: <think>
# 嗯,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。可能他们只是想了解阿里巴巴的基本信息,或者是在做作业、写报告,或者只是好奇。不管怎样,用户需要的是简洁明了的回答,不能太冗长。
# 接下来,我得回忆一下阿里巴巴的基本信息。阿里巴巴集团成立于1999年,总部在杭州,对吧?创始人是马云,不过现在可能已经卸任了。阿里巴巴主要业务包括电商平台,比如淘宝、天猫,还有菜鸟网络、支付宝、阿里云这些子公司。可能还需要提到它的全球影响力,比如在电子商务、云计算、金融科技方面的成就。
# 用户可能还想知道阿里巴巴的规模,比如员工数量、市值,或者它在行业中的地位。不过用户要求简短,所以这些细节可能需要简化。另外,可能需要提到它的使命,比如让天下没有难做的生意,或者推动数字化转型。
# 还要注意避免错误信息,比如马云是否还在任。我记得他已经在2019年卸任董事局主席,现在是董事会成员。另外,阿里巴巴的上市时间是2014年,港交所和纽交所都有。可能需要提到这些关键点。
# 用户可能没有说出来的深层需求是希望了解阿里巴巴的核心业务和影响力,而不仅仅是表面信息。所以需要突出其作为全球最大的电商平台之一,以及在云计算和金融科技领域的领先地位。同时,可能还要提到其社会责任,比如环保、公益项目等。
# 需要确保回答结构清晰,分点列出,但用户要求简短,所以可能用一两句话概括。比如成立时间、总部、创始人、主要业务、全球地位等。还要注意用词准确,比如“集团”而不是“公司”,因为阿里巴巴有多个子公司。
# 最后检查有没有遗漏的重要信息,比如是否提到支付宝和阿里云,这些是阿里巴巴的重要组成部分。同时,可能需要提到其对中国经济和全球电商的影响,但保持简洁。确保没有错误,比如成立年份、上市地点等。这样用户就能得到一个全面但简短的介绍。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,总部位于中国杭州,由马云创立。它是全球领先的互联网科技公司,核心业务涵盖电子商务(淘宝、天猫)、云计算(阿里云)、金融科技(支付宝)、物流(菜鸟网络)及创新业务(如盒马鲜生、阿里健康等)。阿里巴巴致力于通过数字化技术赋能企业与消费者,推动全球商业变革,旗下拥有
```
### Evaluate the model
Install the evaluation harness first: `pip3 install lm-eval`
```bash
auto-round-eval --model "Intel/Qwen3-8B-int4-AutoRound-gptq-inc" --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,cmmlu,ceval-valid
```
| Metric | BF16 | INT4(best) | INT4(default) |
| :----------------------------------------- | :----: | :----: | :----: |
| Avg | 0.6184 | 0.6123 | 0.6063 |
| arc_easy | 0.8342 | 0.8295 | 0.8224 |
| arc_challenge | 0.5418 | 0.5496 | 0.5418 |
| boolq | 0.8673 | 0.8673 | 0.8654 |
| ceval-valid | 0.7912 | 0.7786 | 0.7741 |
| cmmlu | 0.7702 | 0.7588 | 0.7527 |
| gsm8k 5 shots | 0.8810 | 0.8643 | 0.8688 |
| hellaswag | 0.5708 | 0.5626 | 0.5615 |
| lambada_openai | 0.6400 | 0.6387 | 0.6305 |
| leaderboard_mmlu_pro 5 shots | 0.4759 | 0.4687 | 0.4676 |
| leaderboard_ifeval inst_level_strict_acc | 0.3957 | 0.3957 | 0.3789 |
| leaderboard_ifeval prompt_level_strict_acc | 0.2532 | 0.2477 | 0.2200 |
| mmlu | 0.7294 | 0.7209 | 0.7168 |
| openbookqa | 0.3140 | 0.3120 | 0.8654 |
| piqa | 0.7666 | 0.7628 | 0.7633 |
| truthfulqa_mc1 | 0.3672 | 0.3574 | 0.3550 |
| winogrande | 0.6811 | 0.6827 | 0.6803 |
### Generate the model
Here is the sample command to generate the model.
```bash
auto-round-best \
--model Qwen/Qwen3-8B \
--device 0 \
--group_size 128 \
--bits 4 \
--format 'auto_gptq' \
--output_dir "./tmp_autoround"
```
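Equivalently, the quantization can be driven from Python; below is a minimal sketch following the intel/auto-round README (API names as documented there at the time of writing — treat them as assumptions and check the repo):

```python
from auto_round import AutoRound
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# bits/group_size/sym mirror the CLI flags above
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_gptq")
```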
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
    @article{cheng2023optimize,
      title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
      author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
      journal={arXiv preprint arXiv:2309.05516},
      year={2023}
    }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
YangXiao-nlp/LIMOPro-LIMO-P | YangXiao-nlp | 2025-05-26T04:24:32Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"question-answering",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"region:us"
]
| question-answering | 2025-05-26T02:51:46Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-32B-Instruct
pipeline_tag: question-answering
---
# LIMOPro: Reasoning Refinement for Efficient and Effective Test-time Scaling
<div align="center">
[](https://github.com/GAIR-NLP/LIMOPro.git)
[](https://arxiv.org/abs/xxxx.xxxxx)
</div>

<!-- [](https://opensource.org/licenses/MIT)
[](https://arxiv.org/abs/2405.XXXXX) -->
<!-- Official implementation of ["EfficientLIMO: Reasoning Refinement for Efficient and Effective Test-time Scaling"](https://arxiv.org/abs/2405.XXXXX) -->
## Introduction
Large Language Models (LLMs) have demonstrated impressive reasoning abilities through chain-of-thought (CoT) approaches, particularly when fine-tuned on high-quality reasoning data from more powerful Large Reasoning Models (LRMs). However, reasoning chains distilled from LRMs often contain numerous functional elements that, while mimicking human problem-solving processes, result in unnecessarily verbose outputs.
LIMOPro introduces **PIR (Perplexity-based Importance Refinement)**, a novel framework that systematically refines reasoning chains to optimize the balance between efficiency and effectiveness. Our approach:
1. Classifies functional patterns in reasoning chains into four distinct modes: progressive reasoning and three types of functional steps (verification, multi-method validation, and error correction)
2. Quantitatively measures each functional step's contribution using the PIR metric, which evaluates answer perplexity changes when specific steps are removed
3. Selectively removes low-importance functional steps while preserving the essential progressive reasoning chain
Models fine-tuned on PIR-optimized datasets maintain or enhance accuracy while significantly reducing response length compared to models trained on unrefined data, achieving up to 55% efficiency improvement across challenging reasoning benchmarks.
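To make the PIR metric concrete, below is a minimal sketch of one plausible formulation (function names and the exact scoring rule are our own simplifications; see the repository for the official implementation). It measures how much the answer's perplexity rises when a single functional step is removed from the chain:

```python
import math
import torch

def answer_perplexity(model, tokenizer, context, answer):
    """Perplexity of the answer tokens conditioned on the given context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    ans_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # score only the answer tokens
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())

def pir_score(model, tokenizer, question, steps, answer, i):
    """Importance of step i: perplexity increase when that step is removed."""
    full_chain = question + "\n" + "\n".join(steps)
    ablated_chain = question + "\n" + "\n".join(s for j, s in enumerate(steps) if j != i)
    return (answer_perplexity(model, tokenizer, ablated_chain, answer)
            - answer_perplexity(model, tokenizer, full_chain, answer))
```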
## Key Features
- **PIR Framework**: A novel perplexity-based approach for quantifying reasoning step importance
- **Reasoning Pattern Analysis**: Systematic methodology to classify and understand functional elements in reasoning chains
- **Efficient Fine-tuning**: Create optimized training datasets that preserve reasoning quality while reducing verbosity
- **Improved Inference Performance**: Balance accuracy and efficiency in reasoning-enhanced LLMs
## Installation
```bash
# Clone the repository
git clone https://github.com/GAIR-NLP/LIMOPro.git
cd LIMOPro
# Install dependencies
conda create -n beyondlimo python=3.10
conda activate beyondlimo
pip install -r requirements.txt
```
Modify the machine-specific parameters/config in the `util/config.sh` and `util/config.py` files.
## Data Directory Structure
The `data` directory is organized into several key subdirectories, each serving a specific purpose in the PIR (Perplexity-based Importance Refinement) framework:
### original_data
This directory contains the raw, unmodified datasets that serve as the foundation for our work:
- **LIMO**: Original reasoning chains distilled from DeepSeek-R1
- **LIMO-V2**: Original reasoning chains distilled from QwQ
- **S1**: Original reasoning chains distilled from Gemini Flash Thinking
These datasets represent the verbose reasoning chains produced by Large Reasoning Models (LRMs) before any optimization.
### structure
This directory contains the analytical components of our work:
- **Step classification**: Categorization of each reasoning step into the four distinct modes (progressive reasoning, verification, multi-method validation, and error correction)
- **Step divisions**: The segmentation of complete reasoning chains into discrete steps for analysis
- **PIR scores**: The calculated perplexity-based importance values for each functional step, which quantify how critical each step is to the final answer
This represents the core analytical work of identifying which steps are essential versus which can be safely removed.
### pruning
This directory contains the optimized datasets after applying the PIR framework:
- Different versions of the datasets with varying pruning ratios
- Each pruned dataset represents a different efficiency-effectiveness tradeoff
- These are the refined datasets used for fine-tuning models with improved efficiency
### meta
This directory contains metadata about the datasets.
## Training
To ensure a fair comparison between original and PIR-refined models, all training was conducted using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), a standardized framework for fine-tuning large language models.
Since LIMO is one of our primary baselines, we used identical training scripts from the [LIMO repository](https://github.com/GAIR-NLP/LIMO.git) to ensure fair comparisons. This consistent methodology guarantees that any performance improvements observed in our experiments can be directly attributed to our PIR-refined datasets. When applying PIR to S1, we follow the same training parameters as reported in the [S1 repository](https://github.com/simplescaling/s1).
### Training Configuration
```yaml
### model
model_name_or_path: Qwen/Qwen2.5-32B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
flash_attn: fa2
### dataset
dataset: <the pruned dataset>
cutoff_len: 16384
overwrite_cache: true
preprocessing_num_workers: 64
template: qwen
### output
output_dir: <custom your own path>
logging_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 5.0e-6
num_train_epochs: 15
lr_scheduler_type: cosine
warmup_ratio: 0.0
bf16: true
ddp_timeout: 180000000
```
Note: the data needs to be converted to the format required by [LLaMA-Factory](https://llamafactory.readthedocs.io/en/latest/getting_started/data_preparation.html), and the [dataset_info.json](https://github.com/hiyouga/LLaMA-Factory/blob/main/data/README.md) file in the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/blob/main/data/README.md) repo must be updated accordingly (a sketch follows below).
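As a concrete illustration, the sketch below converts a pruned record into LLaMA-Factory's "alpaca" format; the input path and field names are assumptions about the pruned files, so adjust them to the actual schema:
```python
import json

# Hypothetical conversion of a pruned LIMOPro file into the "alpaca" format;
# the source path and field names are assumptions.
with open("data/pruning/limo_pruned.json") as f:
    records = json.load(f)

converted = [
    {
        "instruction": r["question"],
        "input": "",
        "output": r["reasoning"] + "\n" + r["answer"],
    }
    for r in records
]

with open("limo_pruned_alpaca.json", "w") as f:
    json.dump(converted, f, ensure_ascii=False, indent=2)

# Then register the new file in LLaMA-Factory's data/dataset_info.json, e.g.:
# "limo_pruned": {"file_name": "limo_pruned_alpaca.json"}
```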
## Inference
For evaluation and inference, we provide easy-to-use scripts that allow you to test models trained on both original and PIR-refined datasets.
### Quick Start
```bash
# Place the dataset to be tested in inference/data/
# Run inference with a single command
bash inference/inference.sh MODEL_NAME NUM_GPUS FILE_ID_MIN FILE_ID_MAX DATA_NAME MODEL_PATH SAMPLING_TIMES
```
### Parameters
- `MODEL_NAME`: The name of the model
- `NUM_GPUS`: Number of GPUs to use for inference
- `FILE_ID_MIN`: Starting file ID for batch processing
- `FILE_ID_MAX`: Ending file ID for batch processing
- `DATA_NAME`: Name of the dataset to evaluate on (e.g., "gsm8k", "aime", "amcmath", "gpqa")
- `MODEL_PATH`: The path to your model to be tested
- `SAMPLING_TIMES`: The number of completions sampled per question, used as n when computing pass@1 (see the sketch below)
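For clarity, pass@1 with `n = SAMPLING_TIMES` samples reduces to the fraction of correct completions per question, averaged over all questions; a minimal sketch:
```python
# pass@1 with n samples per question: average of c/n over all questions,
# where c is the number of correct completions for a question.
def pass_at_1(correct_counts: list[int], n: int) -> float:
    return sum(c / n for c in correct_counts) / len(correct_counts)

print(pass_at_1([8, 4, 0], n=8))  # three questions -> 0.5
```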
### Example Usage
```bash
bash inference/inference.sh limo 8 0 200 gpqa_diamond /path/to/your/model 8
```
### Output
The inference results are saved in JSON format in the `inference/data/` directory.
## Evaluation
We provide comprehensive evaluation scripts to assess model performance across various reasoning benchmarks. The evaluation pipeline measures accuracy and token count.
### Quick Start
```bash
# Run evaluation with a single command
bash eval.sh FILE_ID_MIN FILE_ID_MAX DATA_NAME MODEL_NAME SAMPLING_TIMES
```
### Parameters
- `FILE_ID_MIN`: Starting file ID for batch evaluation
- `FILE_ID_MAX`: Ending file ID for batch evaluation
- `DATA_NAME`: Name of the benchmark dataset (e.g., "gsm8k", "aime", "amcmath", "gpqa")
- `MODEL_NAME`: Path to the model checkpoint or HuggingFace model ID
- `SAMPLING_TIMES`: Number of sampling iterations for robust evaluation
### Example Usage
```bash
# Evaluate the baseline model on AIME benchmark
bash eval.sh 0 30 aime limo 8
```
## Results
ACC is accuracy (%), TOK is the average number of generated tokens, and EFF is token efficiency, computed as (ACC/100)/TOK; higher ACC and EFF and lower TOK are better.
| Model | AIME | | | AMC | | | GPQA Diamond | | |
|---|---|---|---|---|---|---|---|---|---|
| | ACC ↑ | TOK ↓ | EFF ↑ | ACC ↑ | TOK ↓ | EFF ↑ | ACC ↑ | TOK ↓ | EFF ↑ |
| Qwen2.5-32B-Instruct | 15.8 | 954 | 1.66E-04 | 67.2 | 737 | 9.11E-04 | 47.0 | 517 | 9.08E-04 |
| R1-Distill-Qwen-32B | 69.2 | 9,311 | 7.43E-05 | 94.4 | 5,561 | 1.70E-04 | 64.7 | 5,634 | 1.15E-04 |
| QwQ | 81.7 | 12,234 | 6.68E-05 | 97.8 | 7,350 | 1.33E-04 | 70.2 | 7,483 | 9.38E-05 |
| S1-32B | 37.9 | 6,646 | 5.71E-05 | 80.9 | 4,542 | 1.78E-04 | 60.7 | 4,172 | 1.46E-04 |
| S1-32B-P | **42.1**<sub>+4.2</sub> | **4,716**<sub>-29%</sub> | **8.92E-05**<sub>+56%</sub> | **83.1**<sub>+2.2</sub> | **3,809**<sub>-16%</sub> | **2.18E-04**<sub>+22%</sub> | **61.6**<sub>+0.9</sub> | **2,472**<sub>-41%</sub> | **2.49E-04**<sub>+71%</sub> |
| LIMO | 56.7 | 12,497 | 4.53E-05 | 91.9 | 5,516 | 1.67E-04 | 67.2 | 7,173 | 9.36E-05 |
| LIMO-P | **63.3**<sub>+6.6</sub> | **10,588**<sub>-15%</sub> | **5.98E-05**<sub>+32%</sub> | **93.8**<sub>+1.9</sub> | **5,235**<sub>-5%</sub> | **1.79E-04**<sub>+7%</sub> | **71.2**<sub>+4</sub> | **6,969**<sub>-3%</sub> | **1.02E-04**<sub>+9%</sub> |
| LIMO-V2 | 66.3 | 13,896 | 4.77E-05 | 94.4 | 6,843 | 1.38E-04 | 70.2 | 8,035 | 8.74E-05 |
| LIMO-V2-P | **71.2**<sub>+4.9</sub> | **12,163**<sub>-12%</sub> | **5.86E-05**<sub>+23%</sub> | **96.6**<sub>+2.2</sub> | **6,348**<sub>-7%</sub> | **1.52E-04**<sub>+10%</sub> | **74.2**<sub>+3</sub> | **6,968**<sub>-13%</sub> | **1.07E-04**<sub>+22%</sub> |
## Links to Our Models
- LIMO-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
- LIMO-V2-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
- S1-32B-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
## Links to Our Datasets
- LIMO-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
- LIMO-V2-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
- S1-32B-P: [🤗 HuggingFace](https://huggingface.co/datasets/YangXiao-nlp/DynToM)
<!-- ## Citation
```bibtex
@article{
title={EfficientLIMO: Reasoning Refinement for Efficient and Effective Test-time Scaling},
author={[AUTHORS]},
journal={arXiv preprint arXiv:2405.XXXXX},
year={2024}
}
``` -->
<!-- ## Contributing
We welcome contributions to improve EfficientLIMO! Please feel free to submit a Pull Request. -->
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
<!-- ## Acknowledgements
[ACKNOWLEDGEMENTS PLACEHOLDER] --> |
mradermacher/gpt2-horror-stories-GGUF | mradermacher | 2025-05-26T04:20:59Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:abbas/gpt2-horror-stories",
"base_model:quantized:abbas/gpt2-horror-stories",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-25T02:27:22Z | ---
base_model: abbas/gpt2-horror-stories
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abbas/gpt2-horror-stories
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt2-horror-stories-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
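As a quick, hedged example (not part of the original quant workflow), a GGUF file from the table below can be loaded with the `llama-cpp-python` bindings; any other GGUF-capable runtime works equally well:
```python
from llama_cpp import Llama

# Illustrative only: load the Q4_K_M quant listed below with llama-cpp-python.
llm = Llama(model_path="gpt2-horror-stories.Q4_K_M.gguf")

out = llm("The house at the end of the street had been empty for years,", max_tokens=64)
print(out["choices"][0]["text"])
```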
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-horror-stories-GGUF/resolve/main/gpt2-horror-stories.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Intel/Qwen3-14B-int4-AutoRound-gptq-inc | Intel | 2025-05-26T04:19:12Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
]
| null | 2025-05-26T02:56:40Z | ---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
base_model:
- Qwen/Qwen3-14B
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) generated by [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### INT4 Inference(CPU/CUDA/INTEL GPU)
```python
from transformers import AutoModelForCausalLM,AutoTokenizer
quantized_model_dir = "Intel/Qwen3-14B-int4-AutoRound-gptq-inc"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
##INT4:
# thinking content: <think>
# Okay, the user wants a short introduction to large language models. Let me start by defining what they are. I should mention that they're AI systems trained on vast amounts of text data. Maybe explain their capabilities, like understanding and generating human-like text. I need to cover different applications, such as answering questions, writing stories, coding, etc. Also, it's important to note their training process, using deep learning techniques like transformers. I should mention their ability to handle multiple languages and adapt to various tasks. But I should keep it concise, so avoid too much technical jargon. Maybe end with their impact on technology and industries. Let me check if I'm missing anything. Oh, maybe mention some examples like GPT, BERT, or other models. But since the user asked for a short intro, maybe just refer to them as examples without going into detail. Alright, that should cover the basics without being too lengthy.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system trained on vast amounts of text data to understand and generate human-like text. These models use deep learning techniques, such as transformer architectures, to process and analyze language patterns, enabling them to perform tasks like answering questions, writing stories, coding, summarizing text, and more. LLMs excel at handling multiple languages, adapting to diverse contexts, and producing coherent, context-aware responses. They power applications ranging from chatbots and virtual assistants to content creation tools and research aids, revolutionizing how humans interact with technology. Examples include models like GPT, BERT, and others developed by companies like OpenAI, Google, and Meta.
##BF16:
# thinking content: <think>
# Okay, the user wants a short introduction to large language models. Let me start by defining what they are. I should mention that they're AI models trained on vast amounts of text data. Maybe explain their capabilities, like generating text, answering questions, and understanding context.
#
# I need to highlight their applications, such as in chatbots, content creation, and research. Also, it's important to note their strengths, like handling multiple languages and complex tasks. But I shouldn't forget to mention the challenges, like data bias and computational needs. Keep it concise but informative. Make sure the language is clear and not too technical. Let me check if I covered the key points without getting too detailed. Alright, that should work.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system trained on vast amounts of text data to understand and generate human-like language. These models excel at tasks like answering questions, writing text, translating languages, coding, and more, by recognizing patterns and context in the input they receive. Powered by deep learning and massive computational resources, LLMs like GPT, BERT, or LLaMA can process and produce coherent, contextually relevant responses across diverse domains. They are widely used in chatbots, content creation, research, and automation, though challenges like bias, data privacy, and ethical use remain critical areas of focus.
prompt = "9.11和9.8哪个数字大"
##INT4:
# thinking content: <think>
# 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的数值大小。看起来都是小数,但可能有不同的位数。首先,我应该比较它们的整数部分。9.11的整数部分是9,而9.8的整数部分也是9,所以整数部分相同,需要比较小数部分。
# 接下来,比较小数部分。9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候,可能需要将它们转换成相同的位数来比较,或者直接比较数值。0.8等于0.80,而0.11显然比0.80小,所以9.8应该更大。
# 不过,用户可能对小数点后的位数有疑问,比如是否需要补零。比如,9.8可以看作9.80,这样比较的话,0.80确实比0.11大。所以结论是9.8更大。
# 不过,也有可能用户有其他考虑,比如是否在特定的上下文中,比如日期或者其他含义,但问题中没有提到,所以应该按数值比较。另外,检查是否有输入错误,比如9.11是否是9.11还是其他形式,但看起来是正确的。
# 再想一下,有没有可能用户混淆了小数点后的位数,比如9.11是两位小数,而9.8是一位,但比较时应该按数值大小,而不是位数。所以,不管位数多少,只要数值更大就行。0.8比0.11大,所以9.8更大。
# 确认一下,9.8等于9.80,而9.11是9.11,所以9.80确实比9.11大。因此,答案应该是9.8更大。
# </think>
# content: 9.8 比 9.11 大。
# **比较过程:**
# 1. **整数部分**:两者整数部分均为 **9**,相同。
# 2. **小数部分**:
# - 9.11 的小数部分为 **0.11**。
# - 9.8 的小数部分为 **0.8**(可视为 **0.80** 以统一位数)。
# 3. **数值比较**:
# $0.
##BF16:
# content: <think>
# 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认用户的问题是否正确,因为这两个数字看起来像是小数,但可能有不同的含义。比如,9.11可能是指9月11日,而9.8可能是指9月8日,或者它们都是小数,比如9.11和9.80。不过,用户的问题里没有提到日期,所以可能只是单纯的数值比较。
# 首先,我应该先假设用户是想比较两个小数的大小。这时候,我需要回忆一下小数比较的方法。比较小数时,首先比较整数部分,如果整数部分相同,再比较小数部分。这里两个数的整数部分都是9,所以接下来比较小数部分。
# 9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候,我需要比较0.11和0.8的大小。通常,比较小数时,可以将它们转换成同位数,比如0.11和0.80,这样比较的话,0.80显然比0.11大,所以9.8比9.11大。
# 不过,也有可能用户有其他的意图,比如9.11指的是9月11日,而9.8指的是9月8日,这时候比较日期的话,9月11日显然比9月8日晚,所以9.11更大。但用户的问题里没有提到日期,所以可能还是数值比较更合理。
# 不过,也有可能用户输入有误,比如9.11和9.8是否都是小数,或者有没有其他符号。比如,9.11可能是一个版本号,而9.8是另一个版本,这时候可能需要看具体的上下文。但如果没有更多上下文,应该按照数值比较来处理。
# 再检查一下,如果用户输入的是9.11和9.8,那么作为小数的话,9.8等于9.80,而9.11是9.11,所以9.80比9.11大。因此,答案应该是9.8更大。
# 不过,也有可能用户有拼写错误,比如9.11是否是9.11,而9.8是否是9.8,或者有没有其他数字。但根据问题中的写法,应该是正确的。
prompt = "How many r in word strawberry"
##INT4:
# thinking content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I have the correct spelling of the word. Strawberry is spelled S-T-R-A-W-B-E-R-R-Y. Let me write that out to visualize it better: S, T, R, A, W, B, E, R, R, Y.
# Now, I need to count each occurrence of the letter 'r'. Let's go through each letter one by one. The first letter is S, not an R. The second is T, also not. The third is R, that's one. Then A, W, B, E – none of those are R. The next letter is R again, that's two. Then another R, that's three. The last letter is Y. So in total, there are three R's in "strawberry".
# Wait, let me double-check to make sure I didn't miss any. Let's count again: S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). So positions 3, 8, and 9 are R's. That's three times. Yeah, that seems right. I don't think I made a mistake here. The word has three R's.
# </think>
# content: The word **"strawberry"** contains **3** instances of the letter **'r'**.
# **Breakdown:**
# S-T-**R**-A-W-B-E-**R**-**R**-Y
# Positions 3, 8, and 9 are 'r's.
# **Answer:** 3.
##BF16:
# thinking content: <think>
# Okay, let's see. The user is asking how many times the letter "r" appears in the word "strawberry." Hmm, first I need to make sure I have the correct spelling of the word. Strawberry is spelled S-T-R-A-W-B-E-R-R-Y. Let me write that out to visualize it better: S, T, R, A, W, B, E, R, R, Y.
# Now, I need to count each occurrence of the letter "r." Let's go through each letter one by one. Starting with the first letter, S – that's not an R. Next is T – also not an R. The third letter is R, so that's one. Then A, W, B, E – none of those are R. The next letter is R again, that's the second one. Then another R right after that, making it the third R. Finally, Y at the end.
# Wait, let me check again to make sure I didn't miss any. The letters are S, T, R, A, W, B, E, R, R, Y. So positions 3, 8, and 9 are R's. That's three R's in total. But wait, sometimes people might miscount if they skip a letter or double-count. Let me count once more: first R is the third letter, then after E comes R, then another R. Yes, that's three.
# I think that's correct. So the answer should be 3. But just to be absolutely sure, maybe I can write out the word and underline each R. S-T-R-A-W-B-E-R-R-Y. Underlining the R's: the third letter, then the eighth and ninth. Yep, three R's. Okay, I'm confident now.
# </think>
# content: The word **"strawberry"** contains **3** instances of the letter **"r"**.
# **Breakdown:**
# S-T-**R**-A-W-B-E-**R**-**R**-Y
# Positions 3, 8, and 9 are all "r".
# **Answer:** 3.
prompt = "请简短介绍一下阿里巴巴公司"
##INT4:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。可能他们需要一个快速了解阿里巴巴的概述,比如成立时间、主要业务、创始人等基本信息。
# 接下来,我要确保信息准确。阿里巴巴成立于1999年,由马云和他的团队创立,总部在杭州。主要业务包括电商平台(淘宝、天猫、京东)、云计算(阿里云)、数字支付(支付宝)、物流(菜鸟网络)等。这些都是关键点,不能遗漏。
# 然后,用户可能想知道阿里巴巴的影响力。作为全球最大的电商平台之一,阿里云在云计算领域排名靠前,支付宝改变了中国的支付方式,这些都能体现其重要性。另外,提到国际化战略,比如Lazada、阿里国际站,显示其全球布局。
# 还要注意用户可能的深层需求。他们可能是在做报告、写文章,或者只是好奇。所以信息要简洁明了,结构清晰,分点列出可能更好,但用户要求简短,所以需要整合成一段。
# 需要避免使用专业术语,保持口语化,但因为是公司介绍,可能需要稍微正式一点。同时,检查是否有最新的数据,比如成立时间是否正确,是否还有其他重要业务需要提及,比如蚂蚁集团,但可能用户只需要主要部分。
# 最后,确保没有错误,比如成立时间、创始人名字、总部地点等。确认后,组织语言,保持简短,覆盖关键点,不超过两段,每段不要太长。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,总部位于中国杭州,由马云及其团队创立。公司最初以B2B电商平台起家,现已成为全球领先的互联网科技企业,业务涵盖电子商务(淘宝、天猫、京东)、云计算(阿里云)、数字支付(支付宝)、物流(菜鸟网络)、金融科技(蚂蚁集团)及国际化业务(如Lazada、阿里国际站)等。阿里巴巴通过技术创新和生态布局,深刻影响了全球数字经济的发展。
##BF16:
# thinking content: <think>
# 嗯,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。可能他们只是想了解阿里巴巴的基本信息,或者是在做作业、写报告,或者只是好奇。不管怎样,用户需要的是简洁明了的回答,不能太冗长。
# 接下来,我得回忆一下阿里巴巴的基本信息。阿里巴巴集团成立于1999年,总部在杭州,对吧?创始人是马云,不过现在可能已经卸任了。阿里巴巴主要业务包括电商平台,比如淘宝、天猫,还有菜鸟网络、支付宝、阿里云这些子公司。可能还需要提到它的全球影响力,比如在电子商务、云计算、金融科技方面的成就。
# 用户可能还想知道阿里巴巴的规模,比如员工数量、市值,或者它在行业中的地位。不过用户要求简短,所以这些细节可能需要简化。另外,可能需要提到它的使命,比如让天下没有难做的生意,或者推动数字化转型。
# 还要注意避免错误信息,比如马云是否还在任。我记得他已经在2019年卸任董事局主席,现在是董事会成员。另外,阿里巴巴的上市时间是2014年,港交所和纽交所都有。可能需要提到这些关键点。
# 用户可能没有说出来的深层需求是希望了解阿里巴巴的核心业务和影响力,而不仅仅是表面信息。所以需要突出其作为全球最大的电商平台之一,以及在云计算和金融科技领域的领先地位。同时,可能还要提到其社会责任,比如环保、公益项目等。
# 需要确保回答结构清晰,分点列出,但用户要求简短,所以可能用一两句话概括。比如成立时间、总部、创始人、主要业务、全球地位等。还要注意用词准确,比如“集团”而不是“公司”,因为阿里巴巴有多个子公司。
# 最后检查有没有遗漏的重要信息,比如是否提到支付宝和阿里云,这些是阿里巴巴的重要组成部分。同时,可能需要提到其对中国经济和全球电商的影响,但保持简洁。确保没有错误,比如成立年份、上市地点等。这样用户就能得到一个全面但简短的介绍。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,总部位于中国杭州,由马云创立。它是全球领先的互联网科技公司,核心业务涵盖电子商务(淘宝、天猫)、云计算(阿里云)、金融科技(支付宝)、物流(菜鸟网络)及创新业务(如盒马鲜生、阿里健康等)。阿里巴巴致力于通过数字化技术赋能企业与消费者,推动全球商业变革,旗下拥有
```
### Evaluate the model
```bash
pip3 install lm-eval
auto-round-eval --model "Intel/Qwen3-14B-int4-AutoRound-gptq-inc" --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,cmmlu,ceval-valid
```
| Metric | BF16 | INT4(best) | INT4(default) |
| :----------------------------------------- | :----: | :----: | :----: |
| Avg | 0.6491 | 0.6484 | 0.6467 |
| arc_easy | 0.8409 | 0.8367 | 0.8396 |
| arc_challenge | 0.5845 | 0.5845 | 0.5776 |
| boolq | 0.8933 | 0.8917 | 0.8954 |
| ceval-valid | 0.8210 | 0.8217 | 0.8098 |
| cmmlu | 0.8020 | 0.7951 | 0.7942 |
| gsm8k 5 shots | 0.8832 | 0.8908 | 0.8863 |
| hellaswag | 0.6095 | 0.6035 | 0.6030 |
| lambada_openai | 0.6773 | 0.6788 | 0.6761 |
| leaderboard_mmlu_pro 5 shots | 0.5322 | 0.5281 | 0.5289 |
| leaderboard_ifeval inst_level_strict_acc | 0.4173 | 0.4245 | 0.4269 |
| leaderboard_ifeval prompt_level_strict_acc | 0.2717 | 0.2699 | 0.2736 |
| mmlu | 0.7714 | 0.7671 | 0.7671 |
| openbookqa | 0.3500 | 0.3440 | 0.3420 |
| piqa | 0.7992 | 0.7960 | 0.7971 |
| truthfulqa_mc1 | 0.4027 | 0.4064 | 0.4027 |
| winogrande | 0.7285 | 0.7348 | 0.7269 |
### Generate the model
Here is the sample command to generate the model.
```bash
auto-round-best \
--model Qwen/Qwen3-14B \
--device 0 \
--group_size 128 \
--bits 4 \
--format 'auto_gptq' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
313707021-TING/qwen2.5-llm-reasoning | 313707021-TING | 2025-05-26T04:18:01Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T04:06:59Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
--- |
SaoSamarth/openai-whisper-small-Khmer-dynamo-one | SaoSamarth | 2025-05-26T04:17:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T04:17:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wuxia196/ppo-2m-LunarLander-v2 | wuxia196 | 2025-05-26T04:16:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T04:16:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.57 +/- 17.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, hedged loading example (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="wuxia196/ppo-2m-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
mci29/sn29_s3m0_gzzn | mci29 | 2025-05-26T04:10:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T04:05:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
test-gen/qwen2-3b-easy-unique_lr1e-5 | test-gen | 2025-05-26T04:09:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-26T03:57:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vortex5/LuckyRP-24B | Vortex5 | 2025-05-26T04:03:28Z | 0 | 0 | null | [
"safetensors",
"mistral",
"merge",
"mergekit",
"roleplay",
"storytelling",
"base_model:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:merge:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:trashpanda-org/MS-24B-Mullein-v0",
"base_model:merge:trashpanda-org/MS-24B-Mullein-v0",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T01:55:10Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- roleplay
- storytelling
base_model:
- trashpanda-org/MS-24B-Mullein-v0
- cognitivecomputations/Dolphin3.0-Mistral-24B
---
# LuckyRP-24B
LuckyRP-24B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [trashpanda-org/MS-24B-Mullein-v0](https://huggingface.co/trashpanda-org/MS-24B-Mullein-v0)
* [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)

## Configuration:
The following YAML configuration was used to produce this model:
```yaml
merge_method: slerp
models:
- model: trashpanda-org/MS-24B-Mullein-v0
parameters:
weight: 0.7
- model: cognitivecomputations/Dolphin3.0-Mistral-24B
parameters:
weight: 0.3
base_model: trashpanda-org/MS-24B-Mullein-v0
tokenizer:
source: base
parameters:
t: 0.3
normalize: true
dtype: bfloat16
out_dtype: bfloat16
``` |
ajinkyapuar/nanoVLM | ajinkyapuar | 2025-05-26T04:00:33Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
]
| image-text-to-text | 2025-05-26T03:59:34Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("ajinkyapuar/nanoVLM")
```
|
mradermacher/gladiusprompt-vith-gpt2-i1-GGUF | mradermacher | 2025-05-26T04:00:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RomeroRZ/gladiusprompt-vith-gpt2",
"base_model:quantized:RomeroRZ/gladiusprompt-vith-gpt2",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-26T03:29:01Z | ---
base_model: RomeroRZ/gladiusprompt-vith-gpt2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RomeroRZ/gladiusprompt-vith-gpt2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF/resolve/main/gladiusprompt-vith-gpt2.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
adhitia17/idmt | adhitia17 | 2025-05-26T03:58:04Z | 0 | 0 | null | [
"safetensors",
"t5",
"idmt",
"id",
"dataset:facebook/empathetic_dialogues",
"base_model:muchad/idt5-base",
"base_model:finetune:muchad/idt5-base",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-23T09:17:06Z | ---
license: apache-2.0
datasets:
- facebook/empathetic_dialogues
language:
- id
metrics:
- bleu
- rouge
- accuracy
- f1
base_model:
- muchad/idt5-base
tags:
- idmt
---
# Indonesian Multitask Text Generation and Emotion Classification
This model offers a fresh contribution to emotion-aware dialogue systems in Indonesian: we built the Indonesian Empathetic Dialogue Dataset and used it for multitask text-generation and emotion-classification training on top of the pretrained idT5.
## Model Details
### Model Description
- **Developed by:** Adhitia Erfina, Tran Thi Oanh, Le-Hong Phuong
- **Funded by:** xxxxxxxxxxxxxxxxxxx
- **Model type:** Multitask Text Generation and Emotion Classification
- **Language(s) (NLP):** Indonesia
- **Finetuned from model:** muchad/idt5-base
### Model Sources
- **Repository:** https://github.com/adhitia17/Multitask-Generative-Dialogue-and-Emotion-Classification-with-Indonesian-Empathetic-Dialogue-Dataset
- **Paper:** xxxxxxxxxxxxxxxxxxx
## Uses
This model is designed for multitask text-to-text generation in Indonesian, specifically trained for:
1. **Dialogue Response Generation**: given a user utterance prefixed with `dialog:`, the model generates a relevant conversational response.
2. **Emotion Classification**: given a text prefixed with `emosi:`, the model predicts the underlying emotion expressed in the text.
3. **Context Understanding/Summarization** (if applicable based on the training data): given a text prefixed with `konteks:`, the model can perform tasks related to understanding or summarizing the provided context.
It's intended to be used directly via the `transformers` library in Python for applications requiring these capabilities in Indonesian.
### Direct Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
repo_id = "adhitia17/idmt"
print(f"Loading tokenizer and model from {repo_id}...")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
print(f"Model loaded to device: {device}")
def generate_response(input_text, task_prefix):
"""Generates a response from the model for a given task."""
full_input = f"{task_prefix}: {input_text}"
print(f"\nInput ({task_prefix}): {full_input}")
input_ids = tokenizer(full_input, return_tensors="pt").input_ids.to(device)
outputs = model.generate(
input_ids,
max_length=256,
num_beams=5,
early_stopping=True
)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Output: {decoded_output}")
return decoded_output
print("\nInference examples complete.")
```
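A hypothetical quick check of the helper above, using the task prefixes described in the Uses section (the example sentence is illustrative):
```python
# Hypothetical usage of generate_response(); prefixes follow the
# "dialog:" / "emosi:" convention of this model.
generate_response("Aku baru saja lulus ujian!", task_prefix="dialog")  # empathetic reply
generate_response("Aku baru saja lulus ujian!", task_prefix="emosi")   # predicted emotion label
```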
## Training Details
### Training Data
Translated facebook/empathetic_dialogues to Indonesian (81,005 rows)
### Training Procedure
Multitask text generation and emotion classification training using the pretrained idT5-base model
#### Preprocessing
facebook/empathetic_dialogues translated to Indonesian using facebook/nllb-200-1.3B
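For reference, a minimal sketch of how such a translation step can be done with 🤗 Transformers is shown below; the generation settings are assumptions, not the exact configuration used for this dataset:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch of the EN->ID translation step with NLLB-200; generation settings
# here are assumptions, not the authors' exact configuration.
model_name = "facebook/nllb-200-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translate_to_indonesian(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("ind_Latn"),
        max_length=256,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate_to_indonesian("I just passed my exam!"))
```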
#### Training Hyperparameters
- **Learning Rate:** 1e-5
- **Weight Decay:** 0.01
- **Token:** 512
- **Batch:** 64
- **Epochs:** 40
- **Warm Up Steps :** 500
- **Optimizer:** Adam
- **Evaluation Metrics :** BLEU + ROUGE (text generation) & Accuracy + F1 (emotion classification)
## Evaluation
Translated facebook/empathetic_dialogues to Indonesian (12,044 rows)
### Testing Data & Metrics
#### Testing Data
Translated facebook/empathetic_dialogues to Indonesian (10,945 rows)
### Results
- **BLEU:** 0.1071
- **ROUGE:** 0.2264
- **Accuracy:** 0.7064
- **F1:** 0.7049
## Technical Specifications
#### GPU
1x NVIDIA H100 with 80 GB HBM2e memory and FP8 Tensor Core performance of 3,958 TFLOPS.
#### Training Hours
±18 hours
## Citation
xxxxxxxxxxxxxxxxxxx |
manuross1/nbmafckdfll4k | manuross1 | 2025-05-26T03:48:50Z | 3 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-25T04:20:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nbmafckdfll4k
---
# Nbmafckdfll4K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nbmafckdfll4k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nbmafckdfll4k",
"lora_weights": "https://huggingface.co/manuross1/nbmafckdfll4k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/nbmafckdfll4k', weight_name='lora.safetensors')
image = pipeline('nbmafckdfll4k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/nbmafckdfll4k/discussions) to add images that show off what you’ve made with this LoRA.
|
mci29/sn29_q2m3_endz | mci29 | 2025-05-26T03:47:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T03:43:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
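Since the card leaves this blank, a generic, hedged sketch based only on the repo's `llama` / text-generation tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic starting point; nothing model-specific is documented on this card.
tokenizer = AutoTokenizer.from_pretrained("mci29/sn29_q2m3_endz")
model = AutoModelForCausalLM.from_pretrained("mci29/sn29_q2m3_endz")
```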
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gladiusprompt-vith-gpt2-GGUF | mradermacher | 2025-05-26T03:44:47Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RomeroRZ/gladiusprompt-vith-gpt2",
"base_model:quantized:RomeroRZ/gladiusprompt-vith-gpt2",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-25T02:19:10Z | ---
base_model: RomeroRZ/gladiusprompt-vith-gpt2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RomeroRZ/gladiusprompt-vith-gpt2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
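Once a quant is downloaded, a minimal llama-cpp-python sketch looks like this (the filename is one of the quants listed below):

```python
from llama_cpp import Llama

# Load a downloaded quant; Q4_K_M is one of the "fast, recommended" picks below.
llm = Llama(model_path="gladiusprompt-vith-gpt2.Q4_K_M.gguf")
print(llm("Once upon a time", max_tokens=64)["choices"][0]["text"])
```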
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gladiusprompt-vith-gpt2-GGUF/resolve/main/gladiusprompt-vith-gpt2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ArtusDev/Delta-Vector_Sol-Reaver-15B-Instruct_EXL2_4.5bpw_H6 | ArtusDev | 2025-05-26T03:39:21Z | 0 | 0 | null | [
"safetensors",
"mistral",
"roleplay",
"instruct",
"creative_writing",
"story-writing",
"exl3",
"dataset:Delta-Vector/Hydrus-Instruct-SmolTalk-V2",
"dataset:Delta-Vector/Hydrus-SonnetOrca-V2",
"dataset:Delta-Vector/Hydrus-FeedSum-ShareGPT",
"dataset:Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt",
"dataset:Delta-Vector/Hydrus-No_Robots-R1-Filtered",
"dataset:Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt",
"dataset:Delta-Vector/Hydrus-HelpSteer2",
"dataset:Delta-Vector/Hydrus-R1-Thinking-Sharegpt",
"dataset:Delta-Vector/Hydrus-Science-QA-sharegpt",
"dataset:Delta-Vector/Hydrus-Claude-Instruct-2.7K",
"dataset:Delta-Vector/Hydrus-Claude-Instruct-5K",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Benchmaxx",
"dataset:Nitral-AI/ARES-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:Delta-Vector/Ursa-Erebus-16K",
"dataset:Delta-Vector/Ursa-Books-Light-Novels-V1",
"dataset:NewEden/Orion-LIT",
"dataset:Delta-Vector/Ursa-Asstr-V2-18k",
"dataset:Delta-Vector/Ursa-Books-V2",
"dataset:Delta-Vector/Ursa-Scribblehub-7k",
"dataset:Delta-Vector/Ursa-Orion-EA-Comp-Filtered",
"dataset:Delta-Vector/Ursa-HoneyFeed",
"dataset:Delta-Vector/Ursa-Falling-through-the-world",
"base_model:Delta-Vector/Sol-Reaver-15B-Instruct",
"base_model:quantized:Delta-Vector/Sol-Reaver-15B-Instruct",
"exl2",
"region:us"
]
| null | 2025-05-26T02:45:22Z | ---
datasets:
- Delta-Vector/Hydrus-Instruct-SmolTalk-V2
- Delta-Vector/Hydrus-SonnetOrca-V2
- Delta-Vector/Hydrus-FeedSum-ShareGPT
- Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt
- Delta-Vector/Hydrus-No_Robots-R1-Filtered
- Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt
- Delta-Vector/Hydrus-HelpSteer2
- Delta-Vector/Hydrus-R1-Thinking-Sharegpt
- Delta-Vector/Hydrus-Science-QA-sharegpt
- Delta-Vector/Hydrus-Claude-Instruct-2.7K
- Delta-Vector/Hydrus-Claude-Instruct-5K
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Benchmaxx
- Nitral-AI/ARES-ShareGPT
- PocketDoc/Dans-Taskmaxx-TableGPT
- Delta-Vector/Ursa-Erebus-16K
- Delta-Vector/Ursa-Books-Light-Novels-V1
- NewEden/Orion-LIT
- Delta-Vector/Ursa-Asstr-V2-18k
- Delta-Vector/Ursa-Books-V2
- Delta-Vector/Ursa-Scribblehub-7k
- Delta-Vector/Ursa-Orion-EA-Comp-Filtered
- Delta-Vector/Ursa-HoneyFeed
- Delta-Vector/Ursa-Falling-through-the-world
base_model:
- Delta-Vector/Sol-Reaver-15B-Instruct
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- roleplay
- instruct
- creative_writing
- story-writing
- mistral
- exl3
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Sol-Reaver 15B</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffeef8 0%, #fff0e6 50%, #f8e8ff 100%);
color: #8b4a6b;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(255, 245, 250, 0.95));
padding: 30px;
border-radius: 20px;
box-shadow: 0 8px 32px rgba(255, 182, 193, 0.3), 0 4px 16px rgba(255, 215, 0, 0.2);
border: 2px solid rgba(255, 182, 193, 0.4);
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg, rgba(255, 192, 203, 0.1), rgba(255, 215, 0, 0.1), rgba(221, 160, 221, 0.1));
border-radius: 20px;
z-index: -1;
}
.header h1 {
font-size: 32px;
background: linear-gradient(45deg, #d63384, #fd7e14, #e91e63);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
text-align: center;
font-weight: 600;
text-shadow: 0 2px 4px rgba(255, 182, 193, 0.3);
}
.section {
margin-top: 30px;
}
.section h2 {
font-size: 24px;
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
text-align: center;
font-weight: 600;
margin-bottom: 20px;
}
.info p {
color: #8b4a6b;
line-height: 1.8;
font-size: 16px;
}
.info img {
width: 85%;
border-radius: 15px;
margin: 0 auto 15px;
display: block;
box-shadow: 0 8px 25px rgba(255, 182, 193, 0.4);
border: 2px solid rgba(255, 192, 203, 0.5);
}
a {
color: #d63384;
text-decoration: none;
transition: all 0.3s ease;
font-weight: 500;
}
a:hover {
color: #fd7e14;
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.button {
display: inline-block;
background: linear-gradient(45deg, #ffb6c1, #ffd700);
color: #8b4a6b;
padding: 12px 24px;
border-radius: 25px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(255, 182, 193, 0.5);
font-weight: 500;
}
.button:hover {
background: linear-gradient(45deg, #ff91a4, #ffed4e);
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.6);
transform: translateY(-2px);
}
pre {
background: linear-gradient(135deg, rgba(255, 240, 245, 0.8), rgba(255, 248, 220, 0.8));
padding: 20px;
border-radius: 12px;
overflow-x: auto;
border: 1px solid rgba(255, 182, 193, 0.3);
box-shadow: inset 0 2px 4px rgba(255, 182, 193, 0.2);
}
code {
font-family: 'Courier New', monospace;
color: #8b4a6b;
}
.info-card {
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 15px;
overflow: hidden;
box-shadow: 0 4px 20px rgba(255, 182, 193, 0.3);
}
.info-header {
background: linear-gradient(135deg, rgba(255, 192, 203, 0.3), rgba(255, 215, 0, 0.2));
padding: 25px;
border-bottom: 1px solid rgba(255, 182, 193, 0.3);
}
.info-header h3 {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 15px 0;
font-size: 22px;
text-align: center;
font-weight: 600;
}
.model-tags {
display: flex;
gap: 10px;
flex-wrap: wrap;
justify-content: center;
}
.model-tag {
background: linear-gradient(45deg, rgba(255, 182, 193, 0.4), rgba(255, 215, 0, 0.3));
color: #8b4a6b;
padding: 8px 16px;
border-radius: 20px;
font-size: 13px;
border: 1px solid rgba(255, 182, 193, 0.5);
font-weight: 500;
box-shadow: 0 2px 8px rgba(255, 182, 193, 0.2);
}
.model-composition {
padding: 25px;
border-bottom: 1px solid rgba(255, 182, 193, 0.3);
}
.model-composition h4 {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
font-size: 18px;
text-align: center;
font-weight: 600;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 15px;
}
.composition-list li {
color: #8b4a6b;
display: flex;
align-items: baseline;
gap: 12px;
padding: 10px;
background: rgba(255, 240, 245, 0.5);
border-radius: 8px;
border-left: 4px solid #ffb6c1;
}
.model-component {
font-weight: 600;
min-width: 120px;
}
.model-description {
padding: 25px;
background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(255, 240, 245, 0.8));
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 12px;
padding: 20px;
margin-bottom: 20px;
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.2);
}
.metrics-section summary {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
font-size: 18px;
cursor: pointer;
outline: none;
padding: 8px 0;
text-align: center;
font-weight: 600;
transition: all 0.3s ease;
}
.metrics-section summary:hover {
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.creator-section {
margin: 20px 0;
text-align: center;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 25px;
padding: 15px 20px;
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.3);
}
.creator-label {
color: #8b4a6b;
font-size: 14px;
margin-right: 10px;
font-weight: 500;
}
.creator-link {
display: flex;
align-items: center;
gap: 8px;
color: #d63384;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover .creator-arrow {
transform: translateX(4px);
color: #fd7e14;
}
.creator-link:hover {
color: #fd7e14;
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.axolotl-container {
display: flex;
text-align: center; /* This is correctly applied to center the image itself */
justify-content: center;
margin: 30px 0;
}
.axolotl-container img {
max-width: 300px;
border-radius: 15px;
box-shadow: 0 6px 20px rgba(255, 182, 193, 0.4);
border: 2px solid rgba(255, 192, 203, 0.5);
transition: transform 0.3s ease;
display: block; /* Make the image a block element */
margin: 0 auto; /* Center it horizontally within its parent */
}
.axolotl-container img:hover {
transform: scale(1.05);
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Sol Reaver 15B</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/DYgyLUEaHAv9kTffBYH-F.jpeg" alt="Model banner">
<div style="text-align: center;">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link">
<span class="creator-name">Delta-Vector</span>
<span class="creator-arrow">→</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>Sol-Reaver-15B-Instruct</h3>
<div class="model-tags">
<span class="model-tag">15B parameters</span>
<span class="model-tag">Creative / Fresh Prose</span>
<span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span>
</div>
</div>
<div class="model-description">
<p>The first in a new series of Roleplay / Adventure / Co-writer models, finetuned on top of Sol-Reaver-15B-Pretrain.</p>
<p>This model has been trained on 200M tokens of high-quality Instruct data. Its focus is to provide a base for further finetuning and merging.</p>
<p>Its goal is to have refreshing Prose, Creativity, good Instruct following, and the *Brains*.</p>
<p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p>
</div>
</div>
</div>
<div class="section">
<h2>Quantized Versions</h2>
<div class="info-card">
<div class="model-composition">
<h4>Available Downloads</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with LLama.cpp & Forks(Coming Soon!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere))</li>
</ul>
</div>
</div>
</div>
<div class="section">
<h2>Prompting</h2>
<p>Model has been tuned with the ChatML formatting. A typical input would look like this:</p>
<pre><code><|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
</code></pre>
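<p>With transformers, the same layout can be produced via the tokenizer's chat template. A minimal sketch (assumes the repo ships a ChatML chat template):</p>
<pre><code>from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("Delta-Vector/Sol-Reaver-15B-Instruct")
messages = [{"role": "user", "content": "Hi there!"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
</code></pre>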
</div>
<div class="section">
<h2>Samplers</h2>
<p>For testing this model, I used Temp=1 and Min-P=0.1.</p>
<div class="metrics-section">
<details>
<summary>See Axolotl Config</summary>
<pre><code>
https://files.catbox.moe/u9dakg.yml
</code></pre>
</details>
</div>
</div>
<div class="section">
<h2>Training</h2>
<p>The training was done for 2 epochs using 8 x <a href="https://www.nvidia.com/en-us/data-center/h200/">H200s</a> GPUs graciously provided by <a href="https://huggingface.co/kalomaze">Kalomaze</a> for the fine-tuning of the model.</p>
<div class="axolotl-container">
<a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank">
<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl">
</a>
</div>
</div>
<div class="section">
<h2>Credits</h2>
<p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p>
</div>
</div>
</div>
</body>
</html> |
VivekChandra/ppo-LunarLander-v2 | VivekChandra | 2025-05-26T03:37:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T03:37:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.43 +/- 18.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 naming convention and is an assumption, not documented in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the filename is assumed from the standard SB3 convention.
checkpoint = load_from_hub(repo_id="VivekChandra/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ArtusDev/Delta-Vector_Sol-Reaver-15B-Instruct_EXL2_3.5bpw_H6 | ArtusDev | 2025-05-26T03:37:40Z | 0 | 0 | null | [
"safetensors",
"mistral",
"roleplay",
"instruct",
"creative_writing",
"story-writing",
"exl3",
"dataset:Delta-Vector/Hydrus-Instruct-SmolTalk-V2",
"dataset:Delta-Vector/Hydrus-SonnetOrca-V2",
"dataset:Delta-Vector/Hydrus-FeedSum-ShareGPT",
"dataset:Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt",
"dataset:Delta-Vector/Hydrus-No_Robots-R1-Filtered",
"dataset:Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt",
"dataset:Delta-Vector/Hydrus-HelpSteer2",
"dataset:Delta-Vector/Hydrus-R1-Thinking-Sharegpt",
"dataset:Delta-Vector/Hydrus-Science-QA-sharegpt",
"dataset:Delta-Vector/Hydrus-Claude-Instruct-2.7K",
"dataset:Delta-Vector/Hydrus-Claude-Instruct-5K",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Benchmaxx",
"dataset:Nitral-AI/ARES-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:Delta-Vector/Ursa-Erebus-16K",
"dataset:Delta-Vector/Ursa-Books-Light-Novels-V1",
"dataset:NewEden/Orion-LIT",
"dataset:Delta-Vector/Ursa-Asstr-V2-18k",
"dataset:Delta-Vector/Ursa-Books-V2",
"dataset:Delta-Vector/Ursa-Scribblehub-7k",
"dataset:Delta-Vector/Ursa-Orion-EA-Comp-Filtered",
"dataset:Delta-Vector/Ursa-HoneyFeed",
"dataset:Delta-Vector/Ursa-Falling-through-the-world",
"base_model:Delta-Vector/Sol-Reaver-15B-Instruct",
"base_model:quantized:Delta-Vector/Sol-Reaver-15B-Instruct",
"exl2",
"region:us"
]
| null | 2025-05-26T02:42:52Z | ---
datasets:
- Delta-Vector/Hydrus-Instruct-SmolTalk-V2
- Delta-Vector/Hydrus-SonnetOrca-V2
- Delta-Vector/Hydrus-FeedSum-ShareGPT
- Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt
- Delta-Vector/Hydrus-No_Robots-R1-Filtered
- Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt
- Delta-Vector/Hydrus-HelpSteer2
- Delta-Vector/Hydrus-R1-Thinking-Sharegpt
- Delta-Vector/Hydrus-Science-QA-sharegpt
- Delta-Vector/Hydrus-Claude-Instruct-2.7K
- Delta-Vector/Hydrus-Claude-Instruct-5K
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Benchmaxx
- Nitral-AI/ARES-ShareGPT
- PocketDoc/Dans-Taskmaxx-TableGPT
- Delta-Vector/Ursa-Erebus-16K
- Delta-Vector/Ursa-Books-Light-Novels-V1
- NewEden/Orion-LIT
- Delta-Vector/Ursa-Asstr-V2-18k
- Delta-Vector/Ursa-Books-V2
- Delta-Vector/Ursa-Scribblehub-7k
- Delta-Vector/Ursa-Orion-EA-Comp-Filtered
- Delta-Vector/Ursa-HoneyFeed
- Delta-Vector/Ursa-Falling-through-the-world
base_model:
- Delta-Vector/Sol-Reaver-15B-Instruct
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- roleplay
- instruct
- creative_writing
- story-writing
- mistral
- exl3
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Sol-Reaver 15B</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffeef8 0%, #fff0e6 50%, #f8e8ff 100%);
color: #8b4a6b;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(255, 245, 250, 0.95));
padding: 30px;
border-radius: 20px;
box-shadow: 0 8px 32px rgba(255, 182, 193, 0.3), 0 4px 16px rgba(255, 215, 0, 0.2);
border: 2px solid rgba(255, 182, 193, 0.4);
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg, rgba(255, 192, 203, 0.1), rgba(255, 215, 0, 0.1), rgba(221, 160, 221, 0.1));
border-radius: 20px;
z-index: -1;
}
.header h1 {
font-size: 32px;
background: linear-gradient(45deg, #d63384, #fd7e14, #e91e63);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
text-align: center;
font-weight: 600;
text-shadow: 0 2px 4px rgba(255, 182, 193, 0.3);
}
.section {
margin-top: 30px;
}
.section h2 {
font-size: 24px;
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
text-align: center;
font-weight: 600;
margin-bottom: 20px;
}
.info p {
color: #8b4a6b;
line-height: 1.8;
font-size: 16px;
}
.info img {
width: 85%;
border-radius: 15px;
margin: 0 auto 15px;
display: block;
box-shadow: 0 8px 25px rgba(255, 182, 193, 0.4);
border: 2px solid rgba(255, 192, 203, 0.5);
}
a {
color: #d63384;
text-decoration: none;
transition: all 0.3s ease;
font-weight: 500;
}
a:hover {
color: #fd7e14;
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.button {
display: inline-block;
background: linear-gradient(45deg, #ffb6c1, #ffd700);
color: #8b4a6b;
padding: 12px 24px;
border-radius: 25px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(255, 182, 193, 0.5);
font-weight: 500;
}
.button:hover {
background: linear-gradient(45deg, #ff91a4, #ffed4e);
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.6);
transform: translateY(-2px);
}
pre {
background: linear-gradient(135deg, rgba(255, 240, 245, 0.8), rgba(255, 248, 220, 0.8));
padding: 20px;
border-radius: 12px;
overflow-x: auto;
border: 1px solid rgba(255, 182, 193, 0.3);
box-shadow: inset 0 2px 4px rgba(255, 182, 193, 0.2);
}
code {
font-family: 'Courier New', monospace;
color: #8b4a6b;
}
.info-card {
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 15px;
overflow: hidden;
box-shadow: 0 4px 20px rgba(255, 182, 193, 0.3);
}
.info-header {
background: linear-gradient(135deg, rgba(255, 192, 203, 0.3), rgba(255, 215, 0, 0.2));
padding: 25px;
border-bottom: 1px solid rgba(255, 182, 193, 0.3);
}
.info-header h3 {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 15px 0;
font-size: 22px;
text-align: center;
font-weight: 600;
}
.model-tags {
display: flex;
gap: 10px;
flex-wrap: wrap;
justify-content: center;
}
.model-tag {
background: linear-gradient(45deg, rgba(255, 182, 193, 0.4), rgba(255, 215, 0, 0.3));
color: #8b4a6b;
padding: 8px 16px;
border-radius: 20px;
font-size: 13px;
border: 1px solid rgba(255, 182, 193, 0.5);
font-weight: 500;
box-shadow: 0 2px 8px rgba(255, 182, 193, 0.2);
}
.model-composition {
padding: 25px;
border-bottom: 1px solid rgba(255, 182, 193, 0.3);
}
.model-composition h4 {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
font-size: 18px;
text-align: center;
font-weight: 600;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 15px;
}
.composition-list li {
color: #8b4a6b;
display: flex;
align-items: baseline;
gap: 12px;
padding: 10px;
background: rgba(255, 240, 245, 0.5);
border-radius: 8px;
border-left: 4px solid #ffb6c1;
}
.model-component {
font-weight: 600;
min-width: 120px;
}
.model-description {
padding: 25px;
background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(255, 240, 245, 0.8));
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 12px;
padding: 20px;
margin-bottom: 20px;
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.2);
}
.metrics-section summary {
background: linear-gradient(45deg, #d63384, #fd7e14);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
font-size: 18px;
cursor: pointer;
outline: none;
padding: 8px 0;
text-align: center;
font-weight: 600;
transition: all 0.3s ease;
}
.metrics-section summary:hover {
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.creator-section {
margin: 20px 0;
text-align: center;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: linear-gradient(145deg, rgba(255, 240, 245, 0.9), rgba(255, 248, 220, 0.9));
border: 2px solid rgba(255, 182, 193, 0.4);
border-radius: 25px;
padding: 15px 20px;
box-shadow: 0 4px 15px rgba(255, 182, 193, 0.3);
}
.creator-label {
color: #8b4a6b;
font-size: 14px;
margin-right: 10px;
font-weight: 500;
}
.creator-link {
display: flex;
align-items: center;
gap: 8px;
color: #d63384;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover .creator-arrow {
transform: translateX(4px);
color: #fd7e14;
}
.creator-link:hover {
color: #fd7e14;
text-shadow: 0 0 8px rgba(255, 215, 0, 0.6);
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.axolotl-container {
display: flex;
text-align: center; /* This is correctly applied to center the image itself */
justify-content: center;
margin: 30px 0;
}
.axolotl-container img {
max-width: 300px;
border-radius: 15px;
box-shadow: 0 6px 20px rgba(255, 182, 193, 0.4);
border: 2px solid rgba(255, 192, 203, 0.5);
transition: transform 0.3s ease;
display: block; /* Make the image a block element */
margin: 0 auto; /* Center it horizontally within its parent */
}
.axolotl-container img:hover {
transform: scale(1.05);
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Sol Reaver 15B</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/DYgyLUEaHAv9kTffBYH-F.jpeg" alt="Model banner">
<div style="text-align: center;">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link">
<span class="creator-name">Delta-Vector</span>
<span class="creator-arrow">→</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>Sol-Reaver-15B-Instruct</h3>
<div class="model-tags">
<span class="model-tag">15B parameters</span>
<span class="model-tag">Creative / Fresh Prose</span>
<span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span>
</div>
</div>
<div class="model-description">
<p>The first in a new series of Roleplay / Adventure / Co-writer models, finetuned on top of Sol-Reaver-15B-Pretrain.</p>
<p>This model has been trained on 200M tokens of high-quality Instruct data. Its focus is to provide a base for further finetuning and merging.</p>
<p>Its goal is to have refreshing Prose, Creativity, good Instruct following, and the *Brains*.</p>
<p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p>
</div>
</div>
</div>
<div class="section">
<h2>Quantized Versions</h2>
<div class="info-card">
<div class="model-composition">
<h4>Available Downloads</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with LLama.cpp & Forks(Coming Soon!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere))</li>
</ul>
</div>
</div>
</div>
<div class="section">
<h2>Prompting</h2>
<p>Model has been tuned with the ChatML formatting. A typical input would look like this:</p>
<pre><code><|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
</code></pre>
</div>
<div class="section">
<h2>Samplers</h2>
<p>For testing this model, I used Temp=1 and Min-P=0.1.</p>
<div class="metrics-section">
<details>
<summary>See Axolotl Config</summary>
<pre><code>
https://files.catbox.moe/u9dakg.yml
</code></pre>
</details>
</div>
</div>
<div class="section">
<h2>Training</h2>
<p>The training was done for 2 epochs using 8 x <a href="https://www.nvidia.com/en-us/data-center/h200/">H200s</a> GPUs graciously provided by <a href="https://huggingface.co/kalomaze">Kalomaze</a> for the fine-tuning of the model.</p>
<div class="axolotl-container">
<a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank">
<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl">
</a>
</div>
</div>
<div class="section">
<h2>Credits</h2>
<p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p>
</div>
</div>
</div>
</body>
</html> |
test-gen/qwen2-1.5b-random_lr1e-5 | test-gen | 2025-05-26T03:37:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-26T03:33:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gonnn12/three_object_200_step | Gonnn12 | 2025-05-26T03:36:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T03:36:03Z | ---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Gonnn12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
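A hedged inference sketch with plain transformers (assumes the upload contains full merged Qwen2-VL weights rather than only a LoRA adapter):

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# Assumption: full checkpoint; if only adapter weights were pushed,
# load the base model and attach the adapter via PEFT instead.
model = Qwen2VLForConditionalGeneration.from_pretrained("Gonnn12/three_object_200_step", device_map="auto")
processor = AutoProcessor.from_pretrained("Gonnn12/three_object_200_step")
```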
|
cwhite214/kjones | cwhite214 | 2025-05-26T03:35:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T03:08:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kjones
---
# Kjones
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kjones` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "kjones",
"lora_weights": "https://huggingface.co/cwhite214/kjones/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('cwhite214/kjones', weight_name='lora.safetensors')
image = pipeline('kjones').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/cwhite214/kjones/discussions) to add images that show off what you’ve made with this LoRA.
|
Jean1489/xml-roberta-prostata-ner | Jean1489 | 2025-05-26T03:32:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:adapter:FacebookAI/xlm-roberta-large",
"region:us"
]
| null | 2025-05-26T03:29:50Z | ---
base_model: FacebookAI/xlm-roberta-large
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
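As the card is unfilled, a hedged PEFT loading sketch (assumes the adapter was trained for token classification / NER on top of xlm-roberta-large; the label count is not recorded here):

```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

# num_labels must match what the adapter was trained with; it is not documented on this card.
base = AutoModelForTokenClassification.from_pretrained("FacebookAI/xlm-roberta-large")
model = PeftModel.from_pretrained(base, "Jean1489/xml-roberta-prostata-ner")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
```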
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mradermacher/Quokka_111m-i1-GGUF | mradermacher | 2025-05-26T03:29:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:the_pile",
"dataset:guanaco/guanaco",
"base_model:Corianas/Quokka_111m",
"base_model:quantized:Corianas/Quokka_111m",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-26T03:04:15Z | ---
base_model: Corianas/Quokka_111m
datasets:
- the_pile
- guanaco/guanaco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Corianas/Quokka_111m
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Quokka_111m-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
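To fetch a single quant file programmatically, a short huggingface_hub sketch (the filename comes from the table below):

```python
from huggingface_hub import hf_hub_download

# Q4_K_S is flagged "optimal size/speed/quality" in the table below.
path = hf_hub_download("mradermacher/Quokka_111m-i1-GGUF", "Quokka_111m.i1-Q4_K_S.gguf")
```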
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Quokka_111m-i1-GGUF/resolve/main/Quokka_111m.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RayneAmes/justinbieber_v2 | RayneAmes | 2025-05-26T03:26:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-02-23T05:25:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RayneAmes/justinbieber_v3 | RayneAmes | 2025-05-26T03:26:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-02-23T05:27:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/aaronGPTplus-i1-GGUF | mradermacher | 2025-05-26T03:18:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:totallynotbrent/aaronGPTplus",
"base_model:quantized:totallynotbrent/aaronGPTplus",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-26T02:40:29Z | ---
base_model: totallynotbrent/aaronGPTplus
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/totallynotbrent/aaronGPTplus
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aaronGPTplus-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
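As a concrete illustration, here is a minimal sketch of fetching one of the quants from the table below and running it with llama-cpp-python (the package choice, context size, and prompt are assumptions, not part of this card):
```python
# Minimal sketch; assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed below (filename taken from this repo).
model_path = hf_hub_download(
    repo_id="mradermacher/aaronGPTplus-i1-GGUF",
    filename="aaronGPTplus.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=1024)  # load the quantized model
out = llm("Hello,", max_tokens=32)              # simple completion call
print(out["choices"][0]["text"])
```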
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_0.gguf) | i1-Q4_0 | 0.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_1.gguf) | i1-Q4_1 | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTplus-i1-GGUF/resolve/main/aaronGPTplus.i1-Q6_K.gguf) | i1-Q6_K | 0.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LandCruiser/sn29_cold_2605_1 | LandCruiser | 2025-05-26T03:17:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T01:55:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tadkt/GOT_Vietnamese | tadkt | 2025-05-26T03:14:29Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"GOT",
"feature-extraction",
"got",
"vision-language",
"ocr2.0",
"got_vietnamese",
"image-text-to-text",
"custom_code",
"vi",
"en",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2024-11-24T14:56:24Z | ---
license: apache-2.0
language:
- vi
- en
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- got
- vision-language
- ocr2.0
- got_vietnamese
---
## Usage
Inference using Hugging Face transformers on NVIDIA GPUs. Requirements tested on Python 3.10:
```
torch==2.0.1
torchvision==0.15.2
transformers==4.37.2
tiktoken==0.6.0
verovio==4.3.1
accelerate==0.28.0
```
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('tadkt/GOT_Vietnamese', trust_remote_code=True)
model = AutoModel.from_pretrained('tadkt/GOT_Vietnamese', trust_remote_code=True, low_cpu_mem_usage=True, device_map='cuda', use_safetensors=True, pad_token_id=tokenizer.eos_token_id)
model = model.eval().cuda()
# input your test image
image_file = 'xxx.jpg'
# plain texts OCR
res = model.chat(tokenizer, image_file, ocr_type='ocr')
print(res)
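# Note: an additional formatted-output mode may exist upstream in GOT-OCR2.0
# (an assumption, not confirmed by this card), e.g.:
# res = model.chat(tokenizer, image_file, ocr_type='format')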
``` |
tacoma1776/MYSELF1976 | tacoma1776 | 2025-05-26T00:43:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T00:25:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MYSELF1976
---
# Myself1976
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MYSELF1976` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MYSELF1976",
"lora_weights": "https://huggingface.co/tacoma1776/MYSELF1976/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tacoma1776/MYSELF1976', weight_name='lora.safetensors')
image = pipeline('MYSELF1976').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tacoma1776/MYSELF1976/discussions) to add images that show off what you’ve made with this LoRA.
|
btly/drru | btly | 2025-05-26T00:43:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:27:04Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
bigband/JustRa | bigband | 2025-05-26T00:42:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:26:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
bigband/EndlessTezcatlipoca | bigband | 2025-05-26T00:41:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:16:01Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
bigband/VisionaryPoseidon | bigband | 2025-05-26T00:41:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:32:03Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
dimasik87/d4866018-9dc1-4503-8029-4ee72b42acab | dimasik87 | 2025-05-26T00:39:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-25T23:16:33Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4866018-9dc1-4503-8029-4ee72b42acab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- cf8606bc5af5442f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik87/d4866018-9dc1-4503-8029-4ee72b42acab
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/cf8606bc5af5442f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1f013e5f-2248-4122-86e5-3fe07fb937ab
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 1f013e5f-2248-4122-86e5-3fe07fb937ab
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# d4866018-9dc1-4503-8029-4ee72b42acab
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0587
## Model description
More information needed
## Intended uses & limitations
More information needed
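For reference, below is a minimal sketch of loading this LoRA adapter on top of its base model with peft (dtype, device placement, and the sample prompt are assumptions, not taken from the training run):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Hermes-2-Theta-Llama-3-8B"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter from this repository.
model = PeftModel.from_pretrained(base, "dimasik87/d4866018-9dc1-4503-8029-4ee72b42acab")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```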
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5627 | 0.0001 | 1 | 1.5432 |
| 1.224 | 0.0139 | 250 | 1.1310 |
| 0.9114 | 0.0277 | 500 | 1.0587 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bigband/IllustriousKrishna | bigband | 2025-05-26T00:38:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:26:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
IzzulGod/GPT2-Indo-chat-tuned | IzzulGod | 2025-05-26T00:36:24Z | 0 | 2 | null | [
"safetensors",
"gpt2",
"id",
"dataset:FreedomIntelligence/evol-instruct-indonesian",
"base_model:cahya/gpt2-small-indonesian-522M",
"base_model:finetune:cahya/gpt2-small-indonesian-522M",
"license:mit",
"region:us"
]
| null | 2025-05-25T05:36:51Z | ---
license: mit
datasets:
- FreedomIntelligence/evol-instruct-indonesian
language:
- id
base_model:
- cahya/gpt2-small-indonesian-522M
---
# GPT-2 Indonesian Chat Instruct-Tuned Model
An Indonesian conversational AI model fine-tuned from `GPT2-Small` using instruction-following techniques to enable chat-like interactions.
## 📋 Model Overview
This model transforms a base Indonesian GPT-2 text generator into a conversational chatbot capable of following instructions and engaging in question-answering dialogues in Bahasa Indonesia.
- **Base Model**: `cahya/gpt2-small-indonesian-522M`
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Dataset**: `FreedomIntelligence/evol-instruct-indonesian`
- **Language**: Indonesian (Bahasa Indonesia)
- **Task**: Conversational AI / Chat Completion
## 🧪 Project Background
This model was fine-tuned as part of my personal learning journey in AI and LLMs. The training was done entirely on Google Colab (free tier, T4 GPU), as an exercise in building Indonesian conversational AI with limited resources.
## 🚀 Quick Start
### Installation
```bash
pip install transformers torch
```
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Setup device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
# Load model and tokenizer
model_path = "IzzulGod/GPT2-Indo-chat-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
# Generate response
prompt = "User: Siapa presiden pertama Indonesia?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
temperature=0.6,
top_p=0.95,
repetition_penalty=1.2,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Example Output
```
User: Siapa presiden pertama Indonesia?
AI: Presiden pertama Indonesia adalah Soekarno. Sukarno dikenal sebagai seorang pemimpin yang sangat dihormati dan dicintai oleh rakyatnya, terutama di kalangan rakyat Indonesia karena perananya dalam membentuk persatuan bangsa Indonesia. Dia juga dianggap sebagai sosok kunci bagi seluruh masyarakat Indonesia untuk mempertahankan kemerdekaan negara tersebut dari penjajahan Belanda.
```
## 🎯 Model Capabilities
- **Question Answering**: Responds to factual questions in Indonesian
- **Instruction Following**: Capable of following various instructions and tasks
- **Conversational Context**: Maintains context in chat-like interactions
- **Code Generation**: Can generate simple code snippets (R, Python, etc.) with Indonesian explanations
## 📊 Training Details
### Dataset
The model was trained on the `FreedomIntelligence/evol-instruct-indonesian` dataset, which contains conversational data in the following format:
```json
[
{
"from": "human",
"value": "Question or instruction in Indonesian"
},
{
"from": "gpt",
"value": "Detailed response in Indonesian"
}
]
```
### Training Configuration
The model was fine-tuned using LoRA (Low-Rank Adaptation) with aggressive parameter injection across key GPT-2 layers; the key settings are listed below and sketched as code after the lists:
**LoRA Configuration:**
- `r`: 64 (rank)
- `lora_alpha`: 128
- `target_modules`: ["c_attn", "c_proj", "mlp.c_fc", "mlp.c_proj"]
- `lora_dropout`: 0.05
- `bias`: "none"
**Training Arguments:**
- `epochs`: 3
- `batch_size`: 16 per device
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-4
- `scheduler`: cosine
- `weight_decay`: 0.01
- `fp16`: enabled
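Expressed in code, the settings above correspond roughly to the following sketch (reconstructed from the lists, not the exact training script; `task_type` and `output_dir` are assumptions):

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,                     # rank
    lora_alpha=128,
    target_modules=["c_attn", "c_proj", "mlp.c_fc", "mlp.c_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",    # assumption: causal-LM fine-tuning of GPT-2
)

training_args = TrainingArguments(
    output_dir="gpt2-indo-chat",       # assumption: any local directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    fp16=True,
)
```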
### Training Results
```
Final Training Loss: 2.692
Total Steps: 2,766
Training Time: ~1h 45m
```
The model showed consistent improvement with loss decreasing from 3.44 to 2.51 over the training period.
## 🔧 Advanced Usage
### Custom Generation Parameters
```python
# For more creative responses
outputs = model.generate(
**inputs,
max_new_tokens=100,
do_sample=True,
temperature=0.8,
top_p=0.9,
repetition_penalty=1.3
)
# For more focused responses
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
temperature=0.4,
top_p=0.95,
repetition_penalty=1.1
)
```
### Prompt Format
The model expects prompts in the following format:
```
User: [Your question or instruction in Indonesian]
AI:
```
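In code, assembling such a prompt is a one-line helper (illustrative only, not part of the original training script):

```python
def build_prompt(question: str) -> str:
    # Wrap a user question in the expected "User: ...\nAI:" format.
    return f"User: {question}\nAI:"

prompt = build_prompt("Apa ibu kota Indonesia?")
```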
## ⚠️ Limitations
- **Knowledge Base**: The base model was trained primarily on Wikipedia data by [Cahya](https://huggingface.co/cahya), providing general factual knowledge but limited real-world conversational patterns
- **Training Data Scope**: Current fine-tuning focuses on general instruction-following and Q&A rather than natural daily conversations
- **Conversational Style**: Responses may feel formal or academic due to the Wikipedia-based foundation and instruction-tuned nature
- **Model Size**: Relatively small (124M parameters), which may limit complex reasoning capabilities
- **Factual Accuracy**: Responses are generated based on training data and may not always be factually accurate or up-to-date
- **Language Optimization**: Best performance is achieved with Indonesian language inputs
- **Response Consistency**: May occasionally generate repetitive or inconsistent responses
## 🚀 Future Improvements
For enhanced conversational naturalness, consider:
- **Conversational Dataset Training**: Fine-tuning with Indonesian daily conversation datasets
- **Lighter LoRA Configuration**: Using more efficient LoRA parameters for conversation-specific training
- **Multi-turn Dialogue**: Training on multi-turn conversation data for better context handling
- **Informal Language Patterns**: Incorporating colloquial Indonesian expressions and casual speech patterns
## 📝 License
This model is released under the MIT License. See the LICENSE file for details.
## 🤝 Contributing
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
## 📚 Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{izzulgod2025gpt2indochat,
title = {GPT-2 Indonesian Chat Instruct-Tuned Model},
author = {IzzulGod},
year = {2025},
howpublished = {\url{https://huggingface.co/IzzulGod/GPT2-Indo-chat-tuned}},
note = {Fine-tuned version of cahya/gpt2-small-indonesian-522M}
}
```
---
*Disclaimer: This model was developed as an experimental project for learning purposes. While it performs well on basic tasks, it may have limitations in reasoning and real-world usage.*
|
bigband/OmnipotentKhepri | bigband | 2025-05-26T00:35:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:25:16Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
bigband/ProteanTlaloc | bigband | 2025-05-26T00:33:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:25:12Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Remade-AI/Arc_shot | Remade-AI | 2025-05-26T00:32:41Z | 3 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"image-to-video",
"en",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
]
| image-to-video | 2025-05-24T21:16:33Z | ---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
A young Black man wearing a grey baseball cap, a gold chain, and a black shirt stands in a recording studio, singing into a microphone. The background features a neon sign that says "REMADE" and a red couch. The 34Ar2c arc the camera moves in a smooth curve around the man, shifting the perspective around him as he performs with passion.
output:
url: example_videos/1.mp4
- text: >-
A woman with dark hair executes a high kick, kicking up water droplets, against a futuristic man in a mask, in a neon-lit cyberpunk street. The 34Ar2c arc the camera moves in a smooth curve around the two fighters, revealing more of the scene and their confrontation.
output:
url: example_videos/2.mp4
- text: >-
An elderly woman with white hair and sunglasses is seated on a subway train. She wears a dark coat and is lighting a cigarette with a match while reading a book. The 34Ar2c arc the camera moves in a smooth curve around the woman, showing her from different angles as she reads.
output:
url: example_videos/3.mp4
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<h1 style="color: #24292e; margin-top: 0;">Arc shot LoRA for Wan2.1 14B I2V 480p</h1>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Overview</h2>
    <p>This LoRA is trained on the Wan2.1 14B I2V 480p model. It moves the camera in a smooth, curved path around the subject, adding depth and cinematic motion, and is ideal for dramatic reveals or emotional emphasis.
</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Features</h2>
<ul style="margin-bottom: 0;">
<li>Trained on the Wan2.1 14B 480p I2V base model</li>
<li>Consistent results across different object types</li>
<li>Simple prompt structure that's easy to adapt</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Community</h2>
<ul style="margin-bottom: 0;">
<li>
Generate videos with 100+ Camera Control and VFX LoRAs on the
<a href="https://app.remade.ai/canvas/create" style="color: #0366d6; text-decoration: none;">Remade Canvas</a>.
</li>
<li>
<b>Discord:</b>
<a href="https://remade.ai/join-discord?utm_source=Huggingface&utm_medium=Social&utm_campaign=model_release&utm_content=arc_shot" style="color: #0366d6; text-decoration: none;">
Join our community
</a> to generate videos with this LoRA for free
</li>
</ul>
</div>
<Gallery />
# Model File and Inference Workflow
## 📥 Download Links:
- [Arc_shot.safetensors](./Arc_shot.safetensors) - LoRA Model File
- [wan_img2vid_lora_workflow.json](./workflow_I2V/wan_img2vid_lora_workflow.json) - Wan I2V with LoRA Workflow for ComfyUI
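For Diffusers users, a minimal loading sketch is shown below. Treat it as an untested outline rather than an official workflow: it assumes a recent `diffusers` release with Wan 2.1 image-to-video support, and the input frame and generation settings are placeholders.

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the Diffusers-format Wan 2.1 I2V 480p base model and attach this LoRA.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Remade-AI/Arc_shot", weight_name="Arc_shot.safetensors")
pipe.to("cuda")

image = load_image("first_frame.png")  # placeholder: your starting frame
prompt = (
    "An elderly woman reads a book on a subway train. The 34Ar2c arc the camera "
    "moves in a smooth curve around the woman, showing her from different angles."
)
frames = pipe(
    image=image, prompt=prompt, height=480, width=832,
    num_frames=81, guidance_scale=6.0,
).frames[0]
export_to_video(frames, "arc_shot.mp4", fps=16)
```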
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2>
<ul style="margin-bottom: 0;">
<li><b>LoRA Strength:</b> 1.0</li>
<li><b>Embedded Guidance Scale:</b> 6.0</li>
<li><b>Flow Shift:</b> 5.0</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2>
<p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">34Ar2c arc the camera moves in a smooth curve around</code></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2>
<p>For prompting, check out the example prompts; this way of prompting seems to work very well.</p>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
<p>This LoRA works with a modified version of <a href="https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json" style="color: #0366d6; text-decoration: none;">Kijai's Wan Video Wrapper workflow</a>. The main modification is adding a Wan LoRA node connected to the base model.</p>
<img src="./workflow_I2V/workflow_screenshot.png" style="width: 100%; border-radius: 8px; margin: 15px 0; box-shadow: 0 4px 8px rgba(0,0,0,0.1);">
<p>See the Downloads section above for the modified workflow.</p>
</div>
</div>
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Model Information</h2>
<p>The model weights are available in Safetensors format. See the Downloads section above.</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Training Details</h2>
<ul style="margin-bottom: 0;">
<li><b>Base Model:</b> Wan2.1 14B I2V 480p</li>
<li><b>Training Data:</b> Trained on 50 seconds of video comprised of 10 short clips (each clip captioned separately) of arc shots.</li>
            <li><b>Epochs:</b> 15</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Additional Information</h2>
<p>Training was done using <a href="https://github.com/tdrussell/diffusion-pipe" style="color: #0366d6; text-decoration: none;">Diffusion Pipe for Training</a></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Acknowledgments</h2>
<p style="margin-bottom: 0;">Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!</p>
</div>
</div> |
Remade-AI/Crane_up | Remade-AI | 2025-05-26T00:31:24Z | 3 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"image-to-video",
"en",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
]
| image-to-video | 2025-05-24T21:19:57Z | ---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
A woman in a white dress is walking across a large, reflective body of water with mountains in the background. The words "REMADE" are etched into the surface near the water. The cr4n3 crane up effect is applied, smoothly lifting the camera's viewpoint higher, revealing more of the surrounding landscape, including the shoreline and the vast expanse of water ahead as the woman continues walking into the distance.
output:
url: example_videos/1.mp4
- text: >-
Six monks in orange robes walk in a line along a stone path towards a large temple complex shrouded in mist. The scene is captured with a cr4n3 crane up effect, smoothly rising, revealing more of the temple's architectural details and the expanse of the courtyard as they continue to walk towards the temple in the distance.
output:
url: example_videos/2.mp4
- text: >-
In a dark, cavernous room, a young woman with red pigtails sits in a wooden chair, facing a wall of vintage televisions, each displaying a different moving image. The woman turns her head toward one of the screens, and then the cr4n3 crane up effect begins, smoothly lifting the camera's viewpoint higher. As the cr4n3 crane up effect continues, more of the room and the wall of televisions displaying varied scenes become visible, with the programs on the television screens reflecting off the wet floor.
output:
url: example_videos/3.mp4
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<h1 style="color: #24292e; margin-top: 0;">Crane up LoRA for Wan2.1 14B I2V 480p</h1>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Overview</h2>
    <p>This LoRA is trained on the Wan2.1 14B I2V 480p model. It lifts the camera smoothly upward to reveal the scene from above, making it ideal for grand reveals, transitions, or adding cinematic elevation to a moment.
</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Features</h2>
<ul style="margin-bottom: 0;">
<li>Trained on the Wan2.1 14B 480p I2V base model</li>
<li>Consistent results across different object types</li>
<li>Simple prompt structure that's easy to adapt</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Community</h2>
<ul style="margin-bottom: 0;">
<li>
Generate videos with 100+ Camera Control and VFX LoRAs on the
<a href="https://app.remade.ai/canvas/create" style="color: #0366d6; text-decoration: none;">Remade Canvas</a>.
</li>
<li>
<b>Discord:</b>
<a href="https://remade.ai/join-discord?utm_source=Huggingface&utm_medium=Social&utm_campaign=model_release&utm_content=crane_up" style="color: #0366d6; text-decoration: none;">
Join our community
</a> to generate videos with this LoRA for free
</li>
</ul>
</div>
<Gallery />
# Model File and Inference Workflow
## 📥 Download Links:
- [Crane_up.safetensors](./Crane_up.safetensors) - LoRA Model File
- [wan_img2vid_lora_workflow.json](./workflow_I2V/wan_img2vid_lora_workflow.json) - Wan I2V with LoRA Workflow for ComfyUI
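For Diffusers users, an equivalent loading sketch follows; as with the ComfyUI workflow, treat it as an untested outline that assumes a recent `diffusers` release with Wan 2.1 image-to-video support, with placeholder inputs.

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Attach the crane-up LoRA to the Diffusers-format Wan 2.1 I2V 480p base model.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Remade-AI/Crane_up", weight_name="Crane_up.safetensors")
pipe.to("cuda")

image = load_image("first_frame.png")  # placeholder: your starting frame
prompt = (
    "Monks walk toward a misty temple. The cr4n3 crane up effect lifts the "
    "camera's viewpoint higher, revealing the courtyard below."
)
frames = pipe(
    image=image, prompt=prompt, height=480, width=832,
    num_frames=81, guidance_scale=6.0,
).frames[0]
export_to_video(frames, "crane_up.mp4", fps=16)
```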
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2>
<ul style="margin-bottom: 0;">
<li><b>LoRA Strength:</b> 1.0</li>
<li><b>Embedded Guidance Scale:</b> 6.0</li>
<li><b>Flow Shift:</b> 5.0</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2>
<p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">cr4n3 crane up effect</code></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2>
<p>For prompting, check out the example prompts; this way of prompting seems to work very well.</p>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
<p>This LoRA works with a modified version of <a href="https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json" style="color: #0366d6; text-decoration: none;">Kijai's Wan Video Wrapper workflow</a>. The main modification is adding a Wan LoRA node connected to the base model.</p>
<img src="./workflow_I2V/workflow_screenshot.png" style="width: 100%; border-radius: 8px; margin: 15px 0; box-shadow: 0 4px 8px rgba(0,0,0,0.1);">
<p>See the Downloads section above for the modified workflow.</p>
</div>
</div>
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Model Information</h2>
<p>The model weights are available in Safetensors format. See the Downloads section above.</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Training Details</h2>
<ul style="margin-bottom: 0;">
<li><b>Base Model:</b> Wan2.1 14B I2V 480p</li>
<li><b>Training Data:</b> Trained on 50 seconds of video comprised of 10 short clips (each clip captioned separately) of scenes that used the crane up camera motion.</li>
            <li><b>Epochs:</b> 30</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Additional Information</h2>
<p>Training was done using <a href="https://github.com/tdrussell/diffusion-pipe" style="color: #0366d6; text-decoration: none;">Diffusion Pipe for Training</a></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Acknowledgments</h2>
<p style="margin-bottom: 0;">Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!</p>
</div>
</div> |
bigband/MercifulTonatiuh | bigband | 2025-05-26T00:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:18:01Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
yunjae-won/mp_mistral7bv3_sft_dpo_beta1e-1_epoch1_ratio | yunjae-won | 2025-05-26T00:28:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:23:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
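Until official instructions are added, a generic text-generation sketch is given below; the chat-template support and generation settings are assumptions, not documented behavior of this checkpoint.

```python
from transformers import pipeline
import torch

# Generic chat-style sketch; chat-template support is assumed for this checkpoint.
pipe = pipeline(
    "text-generation",
    model="yunjae-won/mp_mistral7bv3_sft_dpo_beta1e-1_epoch1_ratio",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = pipe([{"role": "user", "content": "Summarize what DPO training does."}], max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```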
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
btly/gayi | btly | 2025-05-26T00:18:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T00:09:20Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_darting_shrew | Oceans-ID | 2025-05-26T00:15:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented darting shrew",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-25T15:31:46Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_darting_shrew
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented darting shrew
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_darting_shrew
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_darting_shrew", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ecdlp/flux-poncik | ecdlp | 2025-05-26T00:12:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T00:12:38Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ponçik
---
# Flux Poncik
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ponçik` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ponçik",
"lora_weights": "https://huggingface.co/ecdlp/flux-poncik/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ecdlp/flux-poncik', weight_name='lora.safetensors')
image = pipeline('ponçik').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0001
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ecdlp/flux-poncik/discussions) to add images that show off what you’ve made with this LoRA.
|
RayneAmes/primeape_v2 | RayneAmes | 2025-05-26T00:10:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-02-13T17:43:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MY628/ppo-LunarLander-v2 | MY628 | 2025-05-26T00:04:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T00:03:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.17 +/- 20.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The saved agent's filename inside the repo is assumed; check the Files tab.
checkpoint = load_from_hub("MY628/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
glif-loradex-trainer/Swap_agrawal14_zingy_nft | glif-loradex-trainer | 2025-05-25T23:58:12Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
]
| text-to-image | 2025-05-25T23:58:03Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1748217440156__000001500_0.jpg
text: Doctor $wap_zing_NFT
- output:
url: samples/1748217465023__000001500_1.jpg
text: A mad creepy venomous batman $wap_zing_NFT
base_model: black-forest-labs/FLUX.1-dev
trigger: "$wap_zing_NFT"
instance_prompt: "$wap_zing_NFT"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# zingy_nft
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Swap_agrawal14`.
<Gallery />
## Trigger words
You should use `$wap_zing_NFT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Swap_agrawal14_zingy_nft/tree/main) them in the Files & versions tab.
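## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
A minimal sketch following the usual FLUX LoRA loading pattern; the weight filename below is an assumption, so verify it in the Files & versions tab.
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# Weight filename is assumed; check the repo's Files & versions tab.
pipeline.load_lora_weights(
    "glif-loradex-trainer/Swap_agrawal14_zingy_nft", weight_name="zingy_nft.safetensors"
)
image = pipeline("Doctor $wap_zing_NFT").images[0]
```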
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
vermoney/fa7c9ac1-c037-49cc-8ea3-26aa724f78d9 | vermoney | 2025-05-25T23:54:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-25T23:19:40Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa7c9ac1-c037-49cc-8ea3-26aa724f78d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cf8606bc5af5442f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/fa7c9ac1-c037-49cc-8ea3-26aa724f78d9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/cf8606bc5af5442f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1f013e5f-2248-4122-86e5-3fe07fb937ab
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 1f013e5f-2248-4122-86e5-3fe07fb937ab
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# fa7c9ac1-c037-49cc-8ea3-26aa724f78d9
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9173 | 0.0155 | 280 | 1.1675 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/openthoughts3_100k_code_swap_r1 | mlfoundations-dev | 2025-05-25T23:48:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T18:33:30Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k_code_swap_r1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k_code_swap_r1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts3_100k_code_swap_r1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|