modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-14 06:27:53
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 519
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-14 06:27:45
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
inaf-oact-ai/radiollava-7b-qacapt | inaf-oact-ai | 2025-04-02T12:08:53Z | 1 | 0 | null | [
"safetensors",
"llava",
"radioastronomy",
"image-text-to-text",
"conversational",
"en",
"arxiv:2503.23859",
"base_model:lmms-lab/llava-onevision-qwen2-7b-ov",
"base_model:finetune:lmms-lab/llava-onevision-qwen2-7b-ov",
"license:gpl-3.0",
"region:us"
]
| image-text-to-text | 2025-03-27T16:14:09Z | ---
license: gpl-3.0
language:
- en
base_model:
- lmms-lab/llava-onevision-qwen2-7b-ov
pipeline_tag: image-text-to-text
tags:
- radioastronomy
---
# radiollava-7b-qacapt
https://arxiv.org/abs/2503.23859
radiollava is a domain-specialized vision-language AI assistant tailored for research in radioastronomy, in particular for running
radio source analysis tasks on radio-continuum images. It was trained on ~1.5M user-assistant conversations relative to ~55k radio
images taken from various radio surveys, including ASKAP-EMU, MeerKAT SMGPS and VLA FIRST, and on a set of ~38k image-caption pairs extracted
from arXiv papers (2000-2025) with keywords on radioastronomical topics and techniques.
## Model Details
- **Base Architecture**: llava-onevision
- **Base Model**: llava-onevision-qwen2-7b-ov
- **Parameters**: 7 billion
- **Domain**: Radio Astronomy
- **License**: GPL 3.0 License
- **Development Process**: Supervised Fine-tuning (SFT) on QA pairs
## Using the model
To use this model, you need to install LLaVA-NeXT as described in this repository:
`https://github.com/LLaVA-VL/LLaVA-NeXT`
LLaVA-NeXT requires an outdated version of the `transformers` library (v4.40.0).
To load the model:
```python
from llava.model.builder import load_pretrained_model
tokenizer, model, image_processor, max_length = load_pretrained_model(
model_name_or_path="inaf-oact-ai/radiollava-7b-qacapt",
model_base=None,
model_name="llava_qwen",
device_map="auto"
)
```
To run model inference on an input image:
```python
import torch
from PIL import Image
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
# - Load model
tokenizer, model, image_processor, max_length = load_pretrained_model(
model_name_or_path="inaf-oact-ai/radiollava-7b-qa",
model_base=None,
model_name="llava_qwen",
device_map="auto"
)
# - Load image
image_path= ...
image= Image.fromarray(data).convert("RGB")
# - Process image
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=model.device) for _image in image_tensor]
# - Create prompt
query= "Describe the input image" # Replace it with your query
question = DEFAULT_IMAGE_TOKEN + "\n" + query
conv = copy.deepcopy(conv_templates[conv_template])
conv.system= '<|im_start|>system\nYou are an AI assistant specialized in radio astronomical topics.'
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()
# - Create model inputs
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
image_sizes = [image.size]
# - Generate model response
# Change generation parameters as you wish
do_sample=True
temperature= 0.3
max_new_tokens=4096
output = model.generate(
input_ids,
images=image_tensor,
image_sizes=image_sizes,
do_sample=do_sample,
temperature=temperature if do_sample else None,
max_new_tokens=max_new_tokens,
)
output_parsed= tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
# - Process response as you wish ...
#response= output_parsed.strip("\n").strip()
```
See the tutorials available in the LLaVA-NeXT repository:
`https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA_OneVision_Tutorials.ipynb`
Further usage examples are provided in this repository:
`https://github.com/SKA-INAF/radio-llava.git` |
prithivMLmods/Hand-Gesture-2-Robot | prithivMLmods | 2025-04-02T12:08:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"Robot",
"Hand-Gesture",
"SigLIP2",
"code",
"Sign",
"en",
"dataset:ShadiAbpeikar/HandGesture2Robot",
"base_model:google/siglip2-base-patch16-224",
"base_model:finetune:google/siglip2-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-04-01T17:29:45Z | ---
license: apache-2.0
datasets:
- ShadiAbpeikar/HandGesture2Robot
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Robot
- Hand-Gesture
- SigLIP2
- code
- Sign
---

# **Hand-Gesture-2-Robot**
> **Hand-Gesture-2-Robot** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to recognize hand gestures and map them to specific robot commands using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
"rotate anticlockwise" 0.9926 0.9958 0.9942 944
"increase" 0.9975 0.9975 0.9975 789
"release" 0.9941 1.0000 0.9970 670
"switch" 1.0000 0.9986 0.9993 728
"look up" 0.9984 0.9984 0.9984 635
"Terminate" 0.9983 1.0000 0.9991 580
"decrease" 0.9942 1.0000 0.9971 684
"move backward" 0.9986 0.9972 0.9979 725
"point" 0.9965 0.9913 0.9939 1716
"rotate clockwise" 1.0000 1.0000 1.0000 868
"grasp" 0.9922 0.9961 0.9941 767
"pause" 0.9991 1.0000 0.9995 1079
"move forward" 1.0000 0.9944 0.9972 886
"Confirm" 0.9983 0.9983 0.9983 573
"look down" 0.9985 0.9970 0.9977 664
"move left" 0.9952 0.9968 0.9960 622
"move right" 1.0000 1.0000 1.0000 622
accuracy 0.9972 13552
macro avg 0.9973 0.9977 0.9975 13552
weighted avg 0.9972 0.9972 0.9972 13552
```

The model categorizes hand gestures into 17 different robot commands:
- **Class 0:** "rotate anticlockwise"
- **Class 1:** "increase"
- **Class 2:** "release"
- **Class 3:** "switch"
- **Class 4:** "look up"
- **Class 5:** "Terminate"
- **Class 6:** "decrease"
- **Class 7:** "move backward"
- **Class 8:** "point"
- **Class 9:** "rotate clockwise"
- **Class 10:** "grasp"
- **Class 11:** "pause"
- **Class 12:** "move forward"
- **Class 13:** "Confirm"
- **Class 14:** "look down"
- **Class 15:** "move left"
- **Class 16:** "move right"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Hand-Gesture-2-Robot"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def gesture_classification(image):
"""Predicts the robot command from a hand gesture image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "rotate anticlockwise",
"1": "increase",
"2": "release",
"3": "switch",
"4": "look up",
"5": "Terminate",
"6": "decrease",
"7": "move backward",
"8": "point",
"9": "rotate clockwise",
"10": "grasp",
"11": "pause",
"12": "move forward",
"13": "Confirm",
"14": "look down",
"15": "move left",
"16": "move right"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=gesture_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Hand Gesture to Robot Command",
description="Upload an image of a hand gesture to predict the corresponding robot command."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
# **Intended Use:**
The **Hand-Gesture-2-Robot** model is designed to classify hand gestures into corresponding robot commands. Potential use cases include:
- **Human-Robot Interaction:** Enabling intuitive control of robots using hand gestures.
- **Assistive Technology:** Helping individuals with disabilities communicate commands.
- **Industrial Automation:** Enhancing robotic operations in manufacturing.
- **Gaming & VR:** Providing gesture-based controls for immersive experiences.
- **Security & Surveillance:** Implementing gesture-based access control. |
LHRuig/alexmecumsx | LHRuig | 2025-04-02T12:08:18Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-04-02T12:07:47Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alexmecumsx
---
# alexmecumsx
<Gallery />
## Model description
alexmecumsx lora
## Trigger words
You should use `alexmecumsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/alexmecumsx/tree/main) them in the Files & versions tab.
|
samoline/447b0066-b993-44a2-9ebd-51b765cf8ee0 | samoline | 2025-04-02T12:07:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
]
| null | 2025-04-02T12:03:53Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 447b0066-b993-44a2-9ebd-51b765cf8ee0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a2f9c242bfd30576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2f9c242bfd30576_train_data.json
type:
field_instruction: user_prompt
field_output: resp
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/447b0066-b993-44a2-9ebd-51b765cf8ee0
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/a2f9c242bfd30576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 5179832c-a5ba-4a0e-8dae-8822b782d993
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 5179832c-a5ba-4a0e-8dae-8822b782d993
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 447b0066-b993-44a2-9ebd-51b765cf8ee0
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1431 | 0.0000 | 1 | 1.7900 |
| 1.4591 | 0.0001 | 2 | 1.7897 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Gacrypto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_darting_macaque | Gacrypto | 2025-04-02T12:02:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am snappy darting macaque",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T11:58:04Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_darting_macaque
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am snappy darting macaque
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_darting_macaque
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Gacrypto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_darting_macaque", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf | RichardErkhov | 2025-04-02T12:01:52Z | 0 | 0 | null | [
"gguf",
"arxiv:2404.14219",
"arxiv:2407.13833",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T10:38:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544 - GGUF
- Model creator: https://huggingface.co/besimray/
- Original model: https://huggingface.co/besimray/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q2_K.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q2_K.gguf) | Q2_K | 1.35GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_S.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_M.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K.gguf) | Q3_K | 1.75GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_0.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_0.gguf) | Q4_0 | 2.03GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K.gguf) | Q4_K | 2.16GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_1.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q4_1.gguf) | Q4_1 | 2.24GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_0.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_0.gguf) | Q5_0 | 2.46GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K.gguf) | Q5_K | 2.53GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_1.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q5_1.gguf) | Q5_1 | 2.68GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q6_K.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q6_K.gguf) | Q6_K | 2.92GB |
| [miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q8_0.gguf](https://huggingface.co/RichardErkhov/besimray_-_miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544-gguf/blob/main/miner_id_3_84ba9757-9076-4822-ab9e-11135834d1dd_1729801544.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
library_name: transformers
license: mit
tags:
- unsloth
- transformers
- phi3
- phi
---
# Finetune Phi-3.5, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Phi-3.5 (mini) here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to Microsoft AI and Phi team for creating and releasing these models.
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights multilingual capability of the Phi-3.5 Mini on multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with a much bigger active parameters.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA, long document information retrieval. We see that Phi-3.5-mini is clearly better than Gemma-2 family which only supports 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3.5-mini-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark. At the high-level overview of the model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness.
However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings.
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted
across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as
it is the largest and more capable model of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
|
bowilleatyou/03f6ae10-cb5d-4f91-896e-d13d8c3d1e86 | bowilleatyou | 2025-04-02T12:01:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T08:44:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OnixThailand/OnixThailand | OnixThailand | 2025-04-02T11:59:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T11:58:11Z | ---
license: apache-2.0
---
Onix คืออะไร?
Onix ยา คือแคปซูลลดน้ำหนักที่คิดค้นขึ้นตามหลักวิทยาศาสตร์ ออกแบบมาเพื่อสนับสนุนการเผาผลาญไขมันตามธรรมชาติและการควบคุมน้ำหนักอย่างยั่งยืน ช่วยให้บุคคลต่างๆ บรรลุเป้าหมายด้านฟิตเนสได้โดยการเพิ่มการเผาผลาญ เพิ่มระดับพลังงาน และส่งเสริมการเผาผลาญแคลอรีอย่างมีประสิทธิภาพ ซึ่งแตกต่างจากโปรแกรมควบคุมอาหารแบบสุดโต่งหรือการออกกำลังกายที่เข้มข้น Onix แคปซูล ทำงานร่วมกับการทำงานตามธรรมชาติของร่างกายเพื่ออำนวยความสะดวกในการลดน้ำหนักอย่างค่อยเป็นค่อยไปและยาวนาน ซึ่งทำให้เป็นทางออกที่เหมาะสำหรับผู้ที่ต้องการมีรูปร่างที่แข็งแรงและเพรียวบางโดยไม่ต้องเปลี่ยนแปลงวิถีชีวิตอย่างรุนแรง Onix ค่าใช้จ่าย
เว็บไซต์อย่างเป็นทางการ:<a href="https://www.nutritionsee.com/onihailand">www.Onix.com</a>
<p><a href="https://www.nutritionsee.com/onihailand"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/04/Onix-Thailand.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/onihailand">ซื้อเลย!! คลิกลิงก์ด้านล่างเพื่อดูรายละเอียดเพิ่มเติม และรับส่วนลด 50% ทันที... รีบเลย</a>
เว็บไซต์อย่างเป็นทางการ:<a href="https://www.nutritionsee.com/onihailand">www.Onix.com</a> |
MaestrAI/character-lora-1743594612 | MaestrAI | 2025-04-02T11:54:15Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T11:50:11Z | # character LORA Model
This is a LORA model for character character
Created at 2025-04-02 13:50:13
|
gavrilstep/s801 | gavrilstep | 2025-04-02T11:54:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T11:50:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Apel-sin/yandexGPT-5-Lite-8B-instruct-exl2 | Apel-sin | 2025-04-02T11:46:27Z | 0 | 0 | null | [
"ru",
"en",
"base_model:yandex/YandexGPT-5-Lite-8B-instruct",
"base_model:finetune:yandex/YandexGPT-5-Lite-8B-instruct",
"license:other",
"region:us"
]
| null | 2025-03-31T13:38:29Z | ---
license: other
license_name: yandexgpt-5-lite-8b
license_link: LICENSE
language:
- ru
- en
base_model:
- yandex/YandexGPT-5-Lite-8B-instruct
---
# YandexGPT-5-Lite-Instruct
Instruct-версия большой языковой модели YandexGPT 5 Lite на 8B параметров с длиной контекста 32k токенов. Также в отдельном [репозитории](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF) опубликована квантизованная версия модели в формате GGUF.
Обучена на базе [YandexGPT 5 Lite Pretrain](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain), без использования весов каких-либо сторонних моделей. Алайнмент Lite-версии совпадает с алайнментом YandexGPT 5 Pro и состоит из этапов SFT и RLHF (более подробно о них — в [статье](https://habr.com/ru/companies/yandex/articles/885218/) на Хабре).
Задавайте вопросы в discussions.
## Бенчмарки
По результатам международных бенчмарков и их адаптаций для русского языка, YandexGPT 5 Lite вплотную приблизилась к аналогам (Llama-3.1-8B-instruct и Qwen-2.5-7B-instruct) и превосходит их в ряде сценариев, в том числе — в знании русской культуры и фактов.
<img src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/6b5/eb4/9ea/6b5eb49ea757bc124c938717b21f1cf7.png" alt="Таблица бенчмарков" width="100%"/>
MMLU — 5-shot, все остальные бенчмарки — 0-shot.
## Как использовать
Модель можно запустить через HF Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="cuda",
torch_dtype="auto",
)
messages = [{"role": "user", "content": "Для чего нужна токенизация?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][input_ids.size(1) :], skip_special_tokens=True))
```
Или через vLLM:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
sampling_params = SamplingParams(
temperature=0.3,
top_p=0.9,
max_tokens=1024,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = LLM(
MODEL_NAME,
tensor_parallel_size=1,
)
messages = [{"role": "user", "content": "В чем смысл жизни?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True
)[1:] # remove bos
text = tokenizer.decode(input_ids)
outputs = llm.generate(text, use_tqdm=False, sampling_params=sampling_params)
print(tokenizer.decode(outputs[0].outputs[0].token_ids, skip_special_tokens=True))
```
Для запуска в llama.cpp и ollama можно воспользоваться нашей квантизованной моделью, которая выложена в репозитории [YandexGPT-5-Lite-8B-instruct-GGUF](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF).
## Особенности токенизации
Для полного соответствия токенизации мы рекомендуем пользоваться оригинальным [sentencepiece](https://github.com/google/sentencepiece) — файл токенизатора лежит в папке `original_tokenizer`. В нашей инфраструктуре каждую реплику диалога мы токенизируем отдельно.
Из-за этого, в частности, появляется пробел в начале каждой реплики. Также `\n` токены мы заменяем на `[NL]`, это можно сделать с помощью `text.replace("\n", "[NL]")` перед токенизацией.
## Особенности шаблона
Мы используем нестандартный шаблон диалога — модель обучена генерировать только одну реплику после последовательности `Ассистент:[SEP]`, завершая её токеном `</s>`. При этом диалог в промпте может быть любой длины.
Это приводит к тому, что в интерактивном режиме модель может выдавать результаты, отличающиеся от вызова модели в режиме генерации на фиксированном диалоге. Поэтому мы рекомендуем использовать интерактивный режим только для ознакомления с моделью. |
procit007/Classification_Model_v0.0.13 | procit007 | 2025-04-02T11:43:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-02T11:42:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zyl2023/Qwen2.5-1.5B-Open-R1-Distill | zyl2023 | 2025-04-02T11:42:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T08:33:38Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zyl2023/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhaoyunlong2020/huggingface/runs/k8apdt9n)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ivangrapher/s801 | ivangrapher | 2025-04-02T11:40:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T11:36:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/3k_globalbatchsize128_lr4e5_epochs3 | mlfoundations-dev | 2025-04-02T11:36:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:01:09Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 3k_globalbatchsize128_lr4e5_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3k_globalbatchsize128_lr4e5_epochs3
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts_3000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MaestrAI/character-lora-1743593495 | MaestrAI | 2025-04-02T11:36:18Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T11:31:34Z | # character LORA Model
This is a LORA model for character character
Created at 2025-04-02 13:31:49
|
TOMFORD79/ben9 | TOMFORD79 | 2025-04-02T11:35:30Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-04-02T11:25:17Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mlfoundations-dev/3k_globalbatchsize96_lr4e5_epochs3 | mlfoundations-dev | 2025-04-02T11:35:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T08:58:32Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 3k_globalbatchsize96_lr4e5_epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3k_globalbatchsize96_lr4e5_epochs3
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts_3000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Amigoo/Princess | Amigoo | 2025-04-02T11:32:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T11:01:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: princess
---
# Princess
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `princess` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "princess",
"lora_weights": "https://huggingface.co/Amigoo/Princess/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Amigoo/Princess', weight_name='lora.safetensors')
image = pipeline('princess').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Amigoo/Princess/discussions) to add images that show off what you’ve made with this LoRA.
|
Hosseinka/qwen-lr1e-4-r4-a16 | Hosseinka | 2025-04-02T11:30:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T05:54:54Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen-lr1e-4-r4-a16
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-lr1e-4-r4-a16
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hosseinka/qwen-lr1e-4-r4-a16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hosseinksh/qwen-lr1e-4-r4-a16/runs/5cwvk4s3)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.4.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500 | FBIKKIBF | 2025-04-02T11:30:22Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:adapter:stabilityai/stable-diffusion-3.5-medium",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T10:50:39Z | ---
base_model: stabilityai/stable-diffusion-3.5-medium
library_name: diffusers
license: other
instance_prompt: 4-years-old girl xingchen
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500
<Gallery />
## Model description
These are FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500 DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-medium.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? True.
## Trigger words
You should use `4-years-old girl xingchen` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(stabilityai/stable-diffusion-3.5-medium, torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('4-years-old girl xingchen').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/FBIKKIBF/lora-with-text-encoder-medium-r256-1-2500/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
JadenLong/MutBERT | JadenLong | 2025-04-02T11:30:04Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"biology",
"Feature Extraction",
"bioRxiv 2025.01.23.634452",
"custom_code",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-14T08:59:02Z | ---
license: mit
tags:
- biology
- transformers
- Feature Extraction
- bioRxiv 2025.01.23.634452
---
**This is repository for MutBERT (pretrained with mutation data in human genome)**.
**You can find all MutBERT variants at [here](https://huggingface.co/JadenLong).**
## Introduction
This is the official pre-trained model introduced in MutBERT: Probabilistic Genome Representation Improves Genomics Foundation Models.
We sincerely appreciate the Tochka-Al team for the ruRoPEBert implementation, which serves as the base of MutBERT development.
MutBERT is a transformer-based genome foundation model trained only on Human genome.
## Model Source
- Repository: [MutBERT](https://github.com/ai4nucleome/mutBERT)
- Paper: [MutBERT: Probabilistic Genome Representation Improves Genomics Foundation Models](https://www.biorxiv.org/content/10.1101/2025.01.23.634452v1)
## Usage
### Load tokenizer and model
```python
from transformers import AutoTokenizer, AutoModel
model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```
The default attention is flash attention("sdpa"). If you want use basic attention, you can replace it with "eager". Please refer to [here](https://huggingface.co/JadenLong/MutBERT/blob/main/modeling_mutbert.py#L438).
### Get embeddings
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
dna = "ATCGGGGCCCATTA"
inputs = tokenizer(dna, return_tensors='pt')["input_ids"]
mut_inputs = F.one_hot(inputs, num_classes=len(tokenizer)).float().to("cpu") # len(tokenizer) is vocab size
last_hidden_state = model(mut_inputs).last_hidden_state # [1, sequence_length, 768]
# or: last_hidden_state = model(mut_inputs)[0] # [1, sequence_length, 768]
# embedding with mean pooling
embedding_mean = torch.mean(last_hidden_state[0], dim=0)
print(embedding_mean.shape) # expect to be 768
# embedding with max pooling
embedding_max = torch.max(last_hidden_state[0], dim=0)[0]
print(embedding_max.shape) # expect to be 768
```
### Using as a Classifier
```python
from transformers import AutoModelForSequenceClassification
model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, num_labels=2)
```
### With RoPE scaling
Allowed types for RoPE scaling are: `linear` and `dynamic`. To extend the model's context window you need to add rope_scaling parameter.
If you want to scale your model context by 2x:
```python
model_name = "JadenLong/MutBERT"
# Optional: JadenLong/MutBERT-Huamn-Ref, JadenLong/MutBERT-Multi
model = AutoModel.from_pretrained(model_name,
trust_remote_code=True,
rope_scaling={'type': 'dynamic','factor': 2.0}
) # 2.0 for x2 scaling, 4.0 for x4, etc..
```
|
Zaynoid/nem-dpo-x | Zaynoid | 2025-04-02T11:29:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Zaynoid/lam70-v2-sl",
"base_model:finetune:Zaynoid/lam70-v2-sl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T11:02:04Z | ---
base_model:
- Zaynoid/Llama-70b-v2-sl
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* /home/ubuntu/llama-1/merges/merge-slerp
* [Zaynoid/Llama-70b-v2-sl](https://huggingface.co/Zaynoid/Llama-70b-v2-sl)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Zaynoid/Llama-70b-v2-sl
layer_range: [0, 80]
- model: "/home/ubuntu/llama-1/merges/merge-slerp"
layer_range: [0, 80]
merge_method: slerp
base_model: Zaynoid/Llama-70b-v2-sl
tokenizer_source: Zaynoid/Llama-70b-v2-sl
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
navaneeth45/SmolLM | navaneeth45 | 2025-04-02T11:28:47Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:Sujithanumala/SmolLM",
"base_model:finetune:Sujithanumala/SmolLM",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-13T05:42:34Z | ---
library_name: transformers
base_model: Sujithanumala/SmolLM
tags:
- generated_from_trainer
model-index:
- name: SmolLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolLM
This model is a fine-tuned version of [Sujithanumala/SmolLM](https://huggingface.co/Sujithanumala/SmolLM) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 11
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 40
- total_train_batch_size: 440
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 50000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Offbeat19/LRDefender | Offbeat19 | 2025-04-02T11:28:02Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T11:04:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LRDefender
---
# Lrdefender
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LRDefender` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LRDefender",
"lora_weights": "https://huggingface.co/Offbeat19/LRDefender/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Offbeat19/LRDefender', weight_name='lora.safetensors')
image = pipeline('LRDefender').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Offbeat19/LRDefender/discussions) to add images that show off what you’ve made with this LoRA.
|
prathamverma/mistral-7b-openorca-cot-4bit-merged_latest11 | prathamverma | 2025-04-02T11:27:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-02T11:23:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso12/2932153a-da59-498b-b2d1-f0472f113ab2 | lesso12 | 2025-04-02T11:27:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
]
| null | 2025-04-02T09:25:03Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2932153a-da59-498b-b2d1-f0472f113ab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2014528464e8248_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2014528464e8248_train_data.json
type:
field_input: span_labels
field_instruction: source_text
field_output: target_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso12/2932153a-da59-498b-b2d1-f0472f113ab2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000212
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/c2014528464e8248_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 120
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2861971d-7ece-45a2-a786-4df464594654
wandb_project: 12a
wandb_run: your_name
wandb_runid: 2861971d-7ece-45a2-a786-4df464594654
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2932153a-da59-498b-b2d1-f0472f113ab2
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 120
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.6104 |
| 0.0078 | 0.0809 | 500 | 0.0014 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NyeLow/papucotto | NyeLow | 2025-04-02T11:26:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T11:26:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PAPUCOTTO
---
# Papucotto
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PAPUCOTTO` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PAPUCOTTO",
"lora_weights": "https://huggingface.co/NyeLow/papucotto/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('NyeLow/papucotto', weight_name='lora.safetensors')
image = pipeline('PAPUCOTTO').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/NyeLow/papucotto/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/sarashina2-8x70b-GGUF | mradermacher | 2025-04-02T11:26:32Z | 0 | 0 | transformers | [
"transformers",
"ja",
"en",
"base_model:sbintuitions/sarashina2-8x70b",
"base_model:finetune:sbintuitions/sarashina2-8x70b",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-20T12:58:34Z | ---
base_model: sbintuitions/sarashina2-8x70b
language:
- ja
- en
library_name: transformers
license: other
license_link: LICENSE
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sbintuitions/sarashina2-8x70b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q2_K.gguf.part4of4) | Q2_K | 171.2 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_S.gguf.part5of5) | Q3_K_S | 202.4 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_M.gguf.part5of5) | Q3_K_M | 223.5 | lower quality |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_L.gguf.part1of5) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_L.gguf.part2of5) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_L.gguf.part3of5) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_L.gguf.part4of5) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q3_K_L.gguf.part5of5) | Q3_K_L | 239.7 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.IQ4_XS.gguf.part6of6) | IQ4_XS | 251.7 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_S.gguf.part6of6) | Q4_K_S | 265.4 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q4_K_M.gguf.part6of6) | Q4_K_M | 282.5 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part1of7) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part2of7) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part3of7) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part4of7) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part5of7) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part6of7) [P7](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_S.gguf.part7of7) | Q5_K_S | 320.2 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q5_K_M.gguf.part7of7) | Q5_K_M | 330.2 | |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part1of8) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part2of8) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part3of8) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part4of8) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part5of8) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part6of8) [P7](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part7of8) [P8](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q6_K.gguf.part8of8) | Q6_K | 381.7 | very good quality |
| [P1](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part01of10) [P2](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part02of10) [P3](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part03of10) [P4](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part04of10) [P5](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part05of10) [P6](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part06of10) [P7](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part07of10) [P8](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part08of10) [P9](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part09of10) [P10](https://huggingface.co/mradermacher/sarashina2-8x70b-GGUF/resolve/main/sarashina2-8x70b.Q8_0.gguf.part10of10) | Q8_0 | 493.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
xinyuyang9653/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_webbed_piranha | xinyuyang9653 | 2025-04-02T11:25:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am opaque webbed piranha",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T17:55:13Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_webbed_piranha
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am opaque webbed piranha
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_webbed_piranha
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xinyuyang9653/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_webbed_piranha", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tdooms/ss-small | tdooms | 2025-04-02T11:25:40Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-01T12:43:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
processprivate/magic-grpo-qwen2.5-7b | processprivate | 2025-04-02T11:24:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T05:09:37Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
okita-souji/ppo-SnowballTarget | okita-souji | 2025-04-02T11:22:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-04-02T11:22:53Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: okita-souji/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
buntynitin/summery-pro | buntynitin | 2025-04-02T11:22:11Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T11:22:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rayonlabs/86531bc7-9a29-4a70-8174-0ccaf49569b5-ef5cfd10a8a4c79c_dataset_json_X-Amz-Algorithm_AWS4-HMAC-SHA | rayonlabs | 2025-04-02T11:21:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T11:21:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KingEmpire/sn9_pre_c04_16 | KingEmpire | 2025-04-02T11:17:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T10:35:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Csd123/csd_8b_sft_0401_prompt2_fix_200 | Csd123 | 2025-04-02T11:14:06Z | 0 | 0 | null | [
"safetensors",
"internvl_chat",
"custom_code",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T08:16:44Z | ---
license: apache-2.0
---
|
betterdataai/large-sysmon-model | betterdataai | 2025-04-02T11:12:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T11:12:24Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** betterdataai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
davidmcmahon/omega_guard | davidmcmahon | 2025-04-02T11:10:18Z | 0 | 0 | sklearn | [
"sklearn",
"joblib",
"safety",
"guardrail",
"content-filtering",
"prompt-detection",
"machine-learning",
"en",
"license:mit",
"region:us"
]
| null | 2025-04-01T21:24:16Z | ---
language: en
library_name: sklearn
tags:
- safety
- guardrail
- content-filtering
- prompt-detection
- machine-learning
license: mit
---
# Omega Guard - Advanced LLM Prompt Safety Classifier
## Model Overview
Omega Guard is a sophisticated machine learning model designed to detect potentially harmful or malicious prompts in natural language interactions.
## Technical Specifications
- **Python Version**: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
- **Scikit-learn Version**: 1.6.1
- **NumPy Version**: 1.26.4
## Model Capabilities
- Advanced text and feature-based classification
- Comprehensive malicious prompt detection
- Multi-level security pattern recognition
- Scikit-learn compatible Random Forest classifier
## Use Cases
- Content moderation
- Prompt safety filtering
- AI interaction security screening
## Licensing
This model is released under the MIT License.
## Recommended Usage
Carefully evaluate and test the model in your specific use case. This is a machine learning model and may have limitations or biases.
## Performance Metrics
Please refer to the `performance_report.txt` for detailed classification performance.
## Contact
For more information or issues, please open a GitHub issue.
|
ShakhzoDavronov/mt-tokenizer-en-uz | ShakhzoDavronov | 2025-04-02T11:08:51Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T11:08:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso15/a0fd3e1a-c00e-44e4-adc7-0bf9624c4b5d | lesso15 | 2025-04-02T11:08:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T10:43:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a0fd3e1a-c00e-44e4-adc7-0bf9624c4b5d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d10b967ea7cde368_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d10b967ea7cde368_train_data.json
type:
field_input: system_prompt
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/a0fd3e1a-c00e-44e4-adc7-0bf9624c4b5d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/d10b967ea7cde368_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 32af5279-ef1e-48a5-8213-ce38ca082835
wandb_project: 15a
wandb_run: your_name
wandb_runid: 32af5279-ef1e-48a5-8213-ce38ca082835
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a0fd3e1a-c00e-44e4-adc7-0bf9624c4b5d
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.1370 |
| 0.7099 | 0.3368 | 500 | 0.7037 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
iomegak12/llama-3-8b-chat-doctor-full | iomegak12 | 2025-04-02T11:08:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T11:03:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476-si-Sentiment-Tagger-DPO-Eval-2-7238 | polyglots | 2025-04-02T11:08:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476",
"base_model:finetune:polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T11:08:05Z | ---
base_model: polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** polyglots
- **License:** apache-2.0
- **Finetuned from model :** polyglots/llama-3-8b-DPO-si-Sentiment-Tagger-14476
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zurandmoro/c0ac81e2ef1d | zurandmoro | 2025-04-02T11:04:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T10:40:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: c0ac81e2ef1d
---
# C0Ac81E2Ef1D
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `c0ac81e2ef1d` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "c0ac81e2ef1d",
"lora_weights": "https://huggingface.co/zurandmoro/c0ac81e2ef1d/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zurandmoro/c0ac81e2ef1d', weight_name='lora.safetensors')
image = pipeline('c0ac81e2ef1d').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/zurandmoro/c0ac81e2ef1d/discussions) to add images that show off what you’ve made with this LoRA.
|
vannu31/Akshay29540 | vannu31 | 2025-04-02T11:03:44Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T11:03:36Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Akshay29540
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Akshay29540
<Gallery />
## Model description
## Trigger words
You should use `Akshay29540` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/vannu31/Akshay29540/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
pjvm/model | pjvm | 2025-04-02T11:02:47Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T09:30:59Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pjvm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/hana4chrome-v30-sdxl | John6666 | 2025-04-02T10:57:48Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"waifu",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-02T10:49:55Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- waifu
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1422278/hana-4-chrome?modelVersionId=1611547).
This model created by [CHROMEKIDD](https://civitai.com/user/CHROMEKIDD).
|
RichardErkhov/bshada_-_hw-infi-phi-gguf | RichardErkhov | 2025-04-02T10:56:03Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T09:24:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hw-infi-phi - GGUF
- Model creator: https://huggingface.co/bshada/
- Original model: https://huggingface.co/bshada/hw-infi-phi/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hw-infi-phi.Q2_K.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q2_K.gguf) | Q2_K | 1.35GB |
| [hw-infi-phi.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [hw-infi-phi.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [hw-infi-phi.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [hw-infi-phi.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [hw-infi-phi.Q3_K.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q3_K.gguf) | Q3_K | 1.75GB |
| [hw-infi-phi.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [hw-infi-phi.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [hw-infi-phi.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [hw-infi-phi.Q4_0.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q4_0.gguf) | Q4_0 | 2.03GB |
| [hw-infi-phi.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [hw-infi-phi.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [hw-infi-phi.Q4_K.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q4_K.gguf) | Q4_K | 2.16GB |
| [hw-infi-phi.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [hw-infi-phi.Q4_1.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q4_1.gguf) | Q4_1 | 2.24GB |
| [hw-infi-phi.Q5_0.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q5_0.gguf) | Q5_0 | 2.46GB |
| [hw-infi-phi.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [hw-infi-phi.Q5_K.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q5_K.gguf) | Q5_K | 2.53GB |
| [hw-infi-phi.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [hw-infi-phi.Q5_1.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q5_1.gguf) | Q5_1 | 2.68GB |
| [hw-infi-phi.Q6_K.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q6_K.gguf) | Q6_K | 2.92GB |
| [hw-infi-phi.Q8_0.gguf](https://huggingface.co/RichardErkhov/bshada_-_hw-infi-phi-gguf/blob/main/hw-infi-phi.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alxvlsv/rubert-emotions | alxvlsv | 2025-04-02T10:53:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-02T10:52:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsilva/clinicalfieldmapper | tsilva | 2025-04-02T10:52:19Z | 248 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:distilbert/distilgpt2",
"base_model:quantized:distilbert/distilgpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-07T16:38:55Z | ---
base_model: distilbert/distilgpt2
library_name: transformers
model_name: clinicalfieldmapper
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for clinicalfieldmapper
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tsilva/clinicalfieldmapper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tsilva/tsilva_clinicalfieldmapper/runs/fuecdint)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
agittserhat/sarkastik_gemma | agittserhat | 2025-04-02T10:51:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T10:28:22Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: sarkastik_gemma
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sarkastik_gemma
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="agittserhat/sarkastik_gemma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
beyoru/ReFunX | beyoru | 2025-04-02T10:47:27Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T10:43:52Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vannu31/Shivay29540 | vannu31 | 2025-04-02T10:47:05Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T10:46:56Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Shivay29540
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Shivay29540
<Gallery />
## Model description
## Trigger words
You should use `Shivay29540` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/vannu31/Shivay29540/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
vannu31/Latika29540 | vannu31 | 2025-04-02T10:45:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-02T10:45:34Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Latika29540
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Latika29540
<Gallery />
## Model description
## Trigger words
You should use `Latika29540` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/vannu31/Latika29540/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
Bobaduck9173/sdxl_meme_fifth | Bobaduck9173 | 2025-04-02T10:43:40Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-02T10:43:16Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Bobaduck9173/sdxl_meme_fifth
<Gallery />
## Model description
These are Bobaduck9173/sdxl_meme_fifth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Bobaduck9173/sdxl_meme_fifth/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
beyoru/ReFunX_lora | beyoru | 2025-04-02T10:43:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T10:43:07Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** beyoru
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
linger2334/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tall_whale | linger2334 | 2025-04-02T10:41:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bipedal tall whale",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T18:45:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tall_whale
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bipedal tall whale
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tall_whale
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="linger2334/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bipedal_tall_whale", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Cloneofsleep/rousseau_style_LoRA | Cloneofsleep | 2025-04-02T10:40:28Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-02T10:40:23Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: painting in ROUSSEAU style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Cloneofsleep/rousseau_style_LoRA
<Gallery />
## Model description
These are Cloneofsleep/rousseau_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use painting in ROUSSEAU style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Cloneofsleep/rousseau_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
FoundationVision/unitok_mllm | FoundationVision | 2025-04-02T10:39:16Z | 0 | 0 | null | [
"safetensors",
"mini_gemini",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T10:10:20Z | ---
license: apache-2.0
---
|
stepetal/Vikhr_8b_LoRA_epoch_3_v1 | stepetal | 2025-04-02T10:37:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Vikhrmodels/Vikhr-Llama3.1-8B-Instruct-R-21-09-24",
"base_model:finetune:Vikhrmodels/Vikhr-Llama3.1-8B-Instruct-R-21-09-24",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T10:36:58Z | ---
base_model: Vikhrmodels/Vikhr-Llama3.1-8B-Instruct-R-21-09-24
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stepetal
- **License:** apache-2.0
- **Finetuned from model :** Vikhrmodels/Vikhr-Llama3.1-8B-Instruct-R-21-09-24
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF | Best10Coder | 2025-04-02T10:36:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T10:36:12Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Best10Coder/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048
```
|
TareksLab/Wordsmith-V3.0-LLaMa-70B | TareksLab | 2025-04-02T10:34:12Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-mhnnn-x1",
"base_model:merge:Sao10K/70B-L3.3-mhnnn-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated",
"base_model:merge:huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-27T20:00:16Z | ---
base_model:
- Sao10K/70B-L3.3-mhnnn-x1
- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
- Sao10K/L3.1-70B-Hanami-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1)
* [huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: Sao10K/70B-L3.3-mhnnn-x1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 0.20
density: 0.7
epsilon: 0.1
lambda: 1
base_model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
merge_method: della_linear
parameters:
normalize: false
tokenizer:
source: Sao10K/70B-L3.3-mhnnn-x1
dtype: bfloat16
chat_template: llama3
```
|
Ashishdalmia/lora_model | Ashishdalmia | 2025-04-02T10:32:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T10:32:34Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ashishdalmia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
partzel/distilbert-base-uncased-finetuned-imdb | partzel | 2025-04-02T10:29:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-04-02T09:23:33Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6916 | 1.0 | 157 | 2.5012 |
| 2.5716 | 2.0 | 314 | 2.4708 |
| 2.5272 | 3.0 | 471 | 2.4558 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
lesso18/395077dd-ebac-4678-91d0-bca06947cad8 | lesso18 | 2025-04-02T10:29:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2025-04-02T09:23:38Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 395077dd-ebac-4678-91d0-bca06947cad8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 206b765083289506_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/206b765083289506_train_data.json
type:
field_instruction: first_message
field_output: first_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso18/395077dd-ebac-4678-91d0-bca06947cad8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/206b765083289506_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 180
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 12e1d080-67be-43a7-8656-024df6330132
wandb_project: 18a
wandb_run: your_name
wandb_runid: 12e1d080-67be-43a7-8656-024df6330132
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 395077dd-ebac-4678-91d0-bca06947cad8
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 2.3524 |
| 2.1899 | 0.7560 | 500 | 2.0440 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sophie-Rain-Sophie-Rain-Spiderman-Video/HOT.Leaked.Sophie.Rain.Spider-Man.Video.Tutorial.Official | Sophie-Rain-Sophie-Rain-Spiderman-Video | 2025-04-02T10:24:55Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T10:24:00Z | <animated-image data-catalyst=""><a href="https://viralleakedvideo.com/new-leaked-video/?sophie-rain-spiderman-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
sophie rain spiderman video took the internet by storm and amazed viewers on various
social media platforms sophie rain spiderman a young and talented digital creator
recently became famous thanks to this interesting video
sophie rain is a rising star in the digital content creation world known for her engaging
videos that often incorporate humor creativity and relatable themes with a strong
presence on various social media platforms she has garnered a dedicated following
thanks to her unique approach to storytelling and her vibrant personality
garnered immense attention across social media platforms this article aims to guide you
on how to watch the video safely and responsibly sophie the fan video showing sophie
rain spiderman leaked twitter viral |
RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf | RichardErkhov | 2025-04-02T10:20:17Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T08:54:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
FTAudit_phi_3_5_mini_v1 - GGUF
- Model creator: https://huggingface.co/weifar/
- Original model: https://huggingface.co/weifar/FTAudit_phi_3_5_mini_v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [FTAudit_phi_3_5_mini_v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q2_K.gguf) | Q2_K | 1.35GB |
| [FTAudit_phi_3_5_mini_v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [FTAudit_phi_3_5_mini_v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [FTAudit_phi_3_5_mini_v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [FTAudit_phi_3_5_mini_v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [FTAudit_phi_3_5_mini_v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q3_K.gguf) | Q3_K | 1.75GB |
| [FTAudit_phi_3_5_mini_v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [FTAudit_phi_3_5_mini_v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [FTAudit_phi_3_5_mini_v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [FTAudit_phi_3_5_mini_v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q4_0.gguf) | Q4_0 | 2.03GB |
| [FTAudit_phi_3_5_mini_v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [FTAudit_phi_3_5_mini_v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [FTAudit_phi_3_5_mini_v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q4_K.gguf) | Q4_K | 2.16GB |
| [FTAudit_phi_3_5_mini_v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [FTAudit_phi_3_5_mini_v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q4_1.gguf) | Q4_1 | 2.24GB |
| [FTAudit_phi_3_5_mini_v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q5_0.gguf) | Q5_0 | 2.46GB |
| [FTAudit_phi_3_5_mini_v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [FTAudit_phi_3_5_mini_v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q5_K.gguf) | Q5_K | 2.53GB |
| [FTAudit_phi_3_5_mini_v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [FTAudit_phi_3_5_mini_v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q5_1.gguf) | Q5_1 | 2.68GB |
| [FTAudit_phi_3_5_mini_v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q6_K.gguf) | Q6_K | 2.92GB |
| [FTAudit_phi_3_5_mini_v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/weifar_-_FTAudit_phi_3_5_mini_v1-gguf/blob/main/FTAudit_phi_3_5_mini_v1.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harshhmaniya/DeepSeek-R1_Fine_Tuned_Medical | harshhmaniya | 2025-04-02T10:19:56Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:13:18Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harshhmaniya
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ytu-ce-cosmos/backward-cosmos-gpt2-v1 | ytu-ce-cosmos | 2025-04-02T10:16:09Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"tr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T10:09:27Z | ---
language:
- tr
library_name: transformers
---
# Backward GPT-2 Model
## Overview
A GPT-2 model fine-tuned for backward generation (answers → questions) in Turkish.
## Input Format
```
### Response:
[answer text in Turkish]
```
## Output Format
Generated text must be reversed to obtain:
```
### Instruction:
[instruction text in Turkish]
### Input:
[optional input text in Turkish]
```
## Generation Parameters
- Temperature: 1.4
- Top-p: 0.95
- Top-k: 20
- Repetition penalty: 1.5
- EOS token IDs: [36320, eos_token_id]
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/backward-cosmos-gpt2-v1", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("ytu-ce-cosmos/backward-cosmos-gpt2-v1")
# Turkish answer
answer = "İstanbul, Türkiye'nin en kalabalık şehridir ve tarihi, kültürel zenginliği ile ünlüdür."
prompt = f"\n### Response:\n{answer}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
**inputs,
max_new_tokens=40,
temperature=1.4,
top_p=0.95,
top_k=20,
repetition_penalty=1.5,
eos_token_id=[36320, tokenizer.eos_token_id],
pad_token_id=tokenizer.eos_token_id
)
generated_tokens = outputs[0][inputs.input_ids.shape[1]:]
reversed_tokens = generated_tokens.flip(dims=[0])
generated_text = tokenizer.decode(reversed_tokens, skip_special_tokens=True)
parts = generated_text.split("### Input:")
instruction = parts[0].replace("### Instruction:", "").strip()
input_text = parts[1].strip() if len(parts) > 1 else None
``` |
llearningone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_grassy_kangaroo | llearningone | 2025-04-02T10:15:57Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled grassy kangaroo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T22:49:01Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_grassy_kangaroo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled grassy kangaroo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_grassy_kangaroo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="llearningone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_grassy_kangaroo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kiriyk/test_download_2 | kiriyk | 2025-04-02T10:15:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-02T10:15:09Z | ---
base_model: Llama-3.2-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kiriyk
- **License:** apache-2.0
- **Finetuned from model :** Llama-3.2-3B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
buelfhood/unixcoder-base-m2v-pca256 | buelfhood | 2025-04-02T10:15:17Z | 0 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"en",
"base_model:microsoft/unixcoder-base",
"base_model:finetune:microsoft/unixcoder-base",
"license:mit",
"region:us"
]
| null | 2025-04-02T10:14:29Z | ---
base_model: microsoft/unixcoder-base
language:
- en
library_name: model2vec
license: mit
model_name: buelfhood/unixcoder-base-m2v-pca256
tags:
- embeddings
- static-embeddings
- sentence-transformers
---
# buelfhood/unixcoder-base-m2v-pca256 Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the microsoft/unixcoder-base(https://huggingface.co/microsoft/unixcoder-base) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
### Using Model2Vec
The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("buelfhood/unixcoder-base-m2v-pca256")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Using Sentence Transformers
You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:
```python
from sentence_transformers import SentenceTransformer
# Load a pretrained Sentence Transformer model
model = SentenceTransformer("buelfhood/unixcoder-base-m2v-pca256")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Distilling a Model2Vec model
You can distill a Model2Vec model from a Sentence Transformer model using the `distill` method. First, install the `distill` extra with `pip install model2vec[distill]`. Then, run the following code:
```python
from model2vec.distill import distill
# Distill a Sentence Transformer model, in this case the BAAI/bge-base-en-v1.5 model
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
## How it works
Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). During inference, we simply take the mean of all token embeddings occurring in a sentence.
## Additional Resources
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
- [Website](https://minishlab.github.io/)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
authors = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
year = {2024},
url = {https://github.com/MinishLab/model2vec}
}
``` |
aisingapore/gemma2-9b-cpt-sea-lionv3-instruct | aisingapore | 2025-04-02T10:13:05Z | 1,864 | 10 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2306.05685",
"base_model:aisingapore/gemma2-9b-cpt-sea-lionv3-base",
"base_model:finetune:aisingapore/gemma2-9b-cpt-sea-lionv3-base",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-10-30T03:19:20Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/gemma2-9b-cpt-sea-lionv3-base
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: gemma
---
<div>
<img src="gemma_2_9b_sea-lion_v3_instruct_banner.png"/>
</div>
# Gemma2 9B CPT SEA-LIONv3 Instruct
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Gemma2 9B CPT SEA-LIONv3 Instruct is a multilingual model which has been fine-tuned with around **500,000 English instruction-completion pairs** alongside a larger pool of around **1,000,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesia, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
## Model Details
### Model Description
We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Gemma2 9B CPT SEA-LIONv3](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Instruct.
For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8192.
### Benchmark Performance
We evaluated Gemma2 9B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task is normalised to account for baseline performance due to random chance.
The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.
#### Instruction-following Capabilities
Since Gemma2 9B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685).
As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
**MT-Bench**
MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5.
For more details on Gemma2 9B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/
### Usage
**NOTE** This model has not been trained to use a system prompt or to use tool calling.
Gemma2 9B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library
```python
# Please use transformers==4.45.2
import transformers
import torch
model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Gemma2 9B CPT SEA-LIONv3 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 15 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs.
## Data
Gemma2 9B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
<details>
<summary><strong>Show Fine-Tuning Data Breakdown</strong></summary>
| Size | Source |
|---------|---------------------------------------------------------------------------------|
| 72441 | AI-MO/NuminaMath-TIR |
| 4335460 | AI Singapore* |
| 8906033 | BAAI/Infinity-Instruct |
| 676803 | HuggingFaceTB/smoltalk |
| 61492 | Post-training-Data-Flywheel/AutoIF-instruct-61k |
| 10000 | ai2-adapt-dev/tulu_v3.9_sciriff_10k |
| 50000 | ai2-adapt-dev/tulu_v3.9_synthetic_finalresp_wildguardmixtrain_decontaminated_50k |
| 50000 | ai2-adapt-dev/tulu_v3.9_wildjailbreak_decontaminated_50k |
| 25014 | airesearch/WangchanThaiInstruct |
| 10983 | allenai/coconot |
| 20000 | allenai/tulu-3-sft-personas-algebra |
| 34999 | allenai/tulu-3-sft-personas-code |
| 29980 | allenai/tulu-3-sft-personas-instruction-following |
| 149960 | allenai/tulu-3-sft-personas-math |
| 49980 | allenai/tulu-3-sft-personas-math-grade |
| 15378 | arcee-ai/EvolKit-20k-vi |
| 74174 | arcee-ai/EvolKit-75K |
| 56339 | argilla/ifeval-like-data |
| 2000000 | nvidia/OpenMathInstruct-2 |
| 118898 | parinzee/seed-free-synthetic-instruct-thai-v1 |
<footer style="text-align:left; font-size:small;">
*Datasets from AI Singapore are a combination of synthetic generations from stronger models and handwritten instructions centered around Southeast Asian culture (particularly from Project SEALD), general instruction-following and chat prompt-response pairs in Southeast Asian languages.
</footer>
</details>
## Indonesian, Javanese & Sudanese Specific SEA-LION
Our partners at GoTo have continued pretrained and instruction tuned a variant of Gemma2 9B CPT SEA-LIONv3, specifically enhancing its capabilities for Indonesian, Javanese, and Sundanese languages. Find the continued pretrained model at [Gemma2 9B CPT SahabatAIv1 Base](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-base), and its corresponding instructioned tuned version at [Gemma2 9B CPT SahabatAIv1 Instruct](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct).
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. |
SimoneManai/granite-3.1-8b-instruct-Empathy | SimoneManai | 2025-04-02T10:12:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"conversational",
"it",
"dataset:SimoneManai/IDRE",
"arxiv:1910.09700",
"base_model:ibm-granite/granite-3.1-8b-instruct",
"base_model:finetune:ibm-granite/granite-3.1-8b-instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T10:07:39Z | ---
library_name: transformers
datasets:
- SimoneManai/IDRE
language:
- it
base_model:
- ibm-granite/granite-3.1-8b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sophie-Rain-Sophie-Rain-Spiderman-Video/Leaked.Videos.Sophie.Rain.Spider-Man.Video.Tutorial.Official | Sophie-Rain-Sophie-Rain-Spiderman-Video | 2025-04-02T10:09:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T10:08:24Z | <animated-image data-catalyst=""><a href="https://viralleakedvideo.com/new-leaked-video/?sophie-rain-spiderman-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
xbinbin/deepseek_accessment_0_2000_4.2.model | xbinbin | 2025-04-02T10:05:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T10:04:58Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xbinbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lesso16/be15df44-07bc-45e2-8c26-69d609bf12db | lesso16 | 2025-04-02T10:04:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T07:28:19Z | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be15df44-07bc-45e2-8c26-69d609bf12db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3532cc1d38f8b21a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3532cc1d38f8b21a_train_data.json
type:
field_input: input_format
field_instruction: prompt
field_output: generation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso16/be15df44-07bc-45e2-8c26-69d609bf12db
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000216
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/3532cc1d38f8b21a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 160
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2bd4821c-9992-4dfb-89e7-d732b8147c9e
wandb_project: 16a
wandb_run: your_name
wandb_runid: 2bd4821c-9992-4dfb-89e7-d732b8147c9e
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# be15df44-07bc-45e2-8c26-69d609bf12db
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0010 | 1 | 2.2269 |
| 7.2395 | 0.4893 | 500 | 0.9008 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sophie-Rain-Sophie-Rain-Spiderman-Video/liVE-Sophie.Rain.Spiderman.Video.Tutorial.Official | Sophie-Rain-Sophie-Rain-Spiderman-Video | 2025-04-02T10:04:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-02T10:03:09Z | <animated-image data-catalyst=""><a href="https://viralleakedvideo.com/new-leaked-video/?sophie-rain-spiderman-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
csukuangfj/sherpa-onnx-dolphin-base-ctc-multi-lang-2025-04-02 | csukuangfj | 2025-04-02T10:03:14Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T08:16:18Z | ---
license: apache-2.0
---
# Introduction
This model is converted from
https://github.com/DataoceanAI/Dolphin
Only the CTC branch is used. |
jahyungu/falcon3-1b-inst-limo | jahyungu | 2025-04-02T10:01:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:tiiuae/Falcon3-1B-Instruct",
"base_model:finetune:tiiuae/Falcon3-1B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:59:48Z | ---
library_name: transformers
license: other
base_model: tiiuae/Falcon3-1B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: falcon3-1B-inst-limo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon3-1B-inst-limo
This model is a fine-tuned version of [../models/falcon3-1b-instruct](https://huggingface.co/../models/falcon3-1b-instruct) on the limo_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso17/c4cca79e-b1de-4596-b9d0-9788881b81e1 | lesso17 | 2025-04-02T10:00:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T06:59:58Z | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4cca79e-b1de-4596-b9d0-9788881b81e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7d58700124812ea0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d58700124812ea0_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso17/c4cca79e-b1de-4596-b9d0-9788881b81e1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000217
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/7d58700124812ea0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 170
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d857695-35c0-4535-8648-b9198bda006a
wandb_project: 17a
wandb_run: your_name
wandb_runid: 4d857695-35c0-4535-8648-b9198bda006a
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c4cca79e-b1de-4596-b9d0-9788881b81e1
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000217
- train_batch_size: 4
- eval_batch_size: 4
- seed: 170
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.4617 |
| 1.4319 | 0.0677 | 500 | 0.1779 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Shishir1807/drug-llama3-2-3b | Shishir1807 | 2025-04-02T09:58:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2025-04-02T09:55:59Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.45.0
```
Also make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.
- Either leave `token=True` in the `pipeline` and login to hugginface_hub by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/drug-llama3-2-3b",
torch_dtype="auto",
trust_remote_code=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
messages = [
{
"role": "system",
"content": "You are a friendly and polite chatbot.",
},
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "I'm doing great, how about you?"},
{"role": "user", "content": "Why is drinking water so healthy?"},
]
res = generate_text(
messages,
renormalize_logits=True
)
print(res[0]["generated_text"][-1]['content'])
```
You can print a sample prompt after applying chat template to see how it is feed to the tokenizer:
```python
print(generate_text.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
))
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/drug-llama3-2-3b" # either local folder or Hugging Face model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
messages = [
{
"role": "system",
"content": "You are a friendly and polite chatbot.",
},
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "I'm doing great, how about you?"},
{"role": "user", "content": "Why is drinking water so healthy?"},
]
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 3072, padding_idx=128009)
(layers): ModuleList(
(0-27): 28 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=3072, out_features=3072, bias=False)
(k_proj): Linear(in_features=3072, out_features=1024, bias=False)
(v_proj): Linear(in_features=3072, out_features=1024, bias=False)
(o_proj): Linear(in_features=3072, out_features=3072, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=3072, out_features=8192, bias=False)
(up_proj): Linear(in_features=3072, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=3072, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((3072,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=3072, out_features=128256, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
tmd-rahul/dialoGPT-chatbot | tmd-rahul | 2025-04-02T09:57:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T02:58:27Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-2b-it
tags:
- generated_from_trainer
model-index:
- name: dialoGPT-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dialoGPT-chatbot
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
el-desdiva/proj_output_LoRA | el-desdiva | 2025-04-02T09:56:40Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-02T09:56:34Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: illustration in PROJ style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - el-desdiva/proj_output_LoRA
<Gallery />
## Model description
These are el-desdiva/proj_output_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use illustration in PROJ style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](el-desdiva/proj_output_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jainnn/harharmahadev | jainnn | 2025-04-02T09:56:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:55:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Wenfi/distillation-T5-cnn | Wenfi | 2025-04-02T09:56:06Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-03-26T11:41:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sehereroglu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_loud_marmot | sehereroglu | 2025-04-02T09:55:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scurrying loud marmot",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:42:12Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_loud_marmot
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scurrying loud marmot
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_loud_marmot
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sehereroglu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scurrying_loud_marmot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SimoneManai/Mistral-7B-Instruct-FT-Empathy | SimoneManai | 2025-04-02T09:51:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"it",
"dataset:SimoneManai/IDRE",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T08:41:47Z | ---
library_name: transformers
datasets:
- SimoneManai/IDRE
language:
- it
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nikunjihub/testmodel3 | nikunjihub | 2025-04-02T09:48:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-02T09:48:26Z | ---
license: apache-2.0
---
|
lesso15/a27bddf0-0478-4555-8817-3ef9df89caab | lesso15 | 2025-04-02T09:47:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:tlphams/gollm-12.8b-instruct-v2.3",
"base_model:adapter:tlphams/gollm-12.8b-instruct-v2.3",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-04-02T05:00:53Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: tlphams/gollm-12.8b-instruct-v2.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a27bddf0-0478-4555-8817-3ef9df89caab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tlphams/gollm-12.8b-instruct-v2.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b782f17e0b29ece4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b782f17e0b29ece4_train_data.json
type:
field_instruction: user_prompt
field_output: resp
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/a27bddf0-0478-4555-8817-3ef9df89caab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/b782f17e0b29ece4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb14c764-469b-4c92-afa7-2fe2bf946626
wandb_project: 15a
wandb_run: your_name
wandb_runid: eb14c764-469b-4c92-afa7-2fe2bf946626
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a27bddf0-0478-4555-8817-3ef9df89caab
This model is a fine-tuned version of [tlphams/gollm-12.8b-instruct-v2.3](https://huggingface.co/tlphams/gollm-12.8b-instruct-v2.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.4915 |
| 2.3473 | 0.4719 | 500 | 0.2866 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
helenabon/hatedemics-v1-llama | helenabon | 2025-04-02T09:46:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
]
| null | 2025-04-02T09:46:04Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.0 |
UsernameNguyen/vit-base-patch16-224-in21k-lora | UsernameNguyen | 2025-04-02T09:46:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T09:46:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wandererupak/wave2vec-bert-oslrULTIMATECOLAB-TAKE-3 | wandererupak | 2025-04-02T09:44:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-02T06:52:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lawrence/lora_model_orpheus_TTS | Lawrence | 2025-04-02T09:43:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-02T09:43:27Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Lawrence
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
milanakdj/amias_8b_4bit_finetuned | milanakdj | 2025-04-02T09:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:39:58Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** milanakdj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Runaweygek/Qwen2.5-14B-sex-v2-lora-F16-GGUF | Runaweygek | 2025-04-02T09:41:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"en",
"zh",
"base_model:likewendy/Qwen2.5-14B-sex-v2-lora",
"base_model:quantized:likewendy/Qwen2.5-14B-sex-v2-lora",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-02T09:41:29Z | ---
base_model: likewendy/Qwen2.5-14B-sex-v2-lora
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---
# Runaweygek/Qwen2.5-14B-sex-v2-lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`likewendy/Qwen2.5-14B-sex-v2-lora`](https://huggingface.co/likewendy/Qwen2.5-14B-sex-v2-lora) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/likewendy/Qwen2.5-14B-sex-v2-lora) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Qwen2.5-14B-sex-v2-lora-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Qwen2.5-14B-sex-v2-lora-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
yangwooko/smartmind-cyberone-20250401 | yangwooko | 2025-04-02T09:40:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-02T08:35:23Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: smartmind-cyberone-20250401
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smartmind-cyberone-20250401
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0987 | 1.896 | 30 | 0.3167 |
| 0.0177 | 3.768 | 60 | 0.1030 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
andreamaduzzi/LLaNA-7B | andreamaduzzi | 2025-04-02T09:40:17Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"nerfllm",
"text-generation",
"en",
"dataset:andreamaduzzi/ShapeNeRF-Text",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-10-23T08:02:21Z | ---
license: mit
language:
- en
base_model:
- meta-llama/Llama-2-7b
library_name: transformers
pipeline_tag: text-generation
datasets:
- andreamaduzzi/ShapeNeRF-Text
--- |
taguser/openshift-tests-private-full-epoch10-2025-Mar-27 | taguser | 2025-04-02T09:38:54Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:other",
"region:us"
]
| null | 2025-03-27T19:49:34Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the openshift-tests-private-full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0 |
taguser/mpc-training-2025-03-23 | taguser | 2025-04-02T09:38:07Z | 48 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"region:us"
]
| null | 2025-03-23T10:27:55Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the mpc_training dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
Subsets and Splits