modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
KnutJaegersberg/Eagle2-1B | KnutJaegersberg | 2025-04-30T12:19:19Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"eagle_chat",
"feature-extraction",
"eagle",
"VLM",
"image-text-to-text",
"conversational",
"custom_code",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:merge:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:google/paligemma-3b-mix-448",
"base_model:merge:google/paligemma-3b-mix-448",
"base_model:google/siglip-so400m-patch14-384",
"base_model:merge:google/siglip-so400m-patch14-384",
"license:cc-by-nc-4.0",
"region:us"
] | image-text-to-text | 2025-01-23T08:50:02Z | ---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- google/paligemma-3b-mix-448
- Qwen/Qwen2.5-0.5B-Instruct
- google/siglip-so400m-patch14-384
base_model_relation: merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
tags:
- eagle
- VLM
---
# Eagle-2
[\[📂 GitHub\]](https://github.com/NVlabs/EAGLE) [\[📜 Eagle2 Tech Report\]](https://github.com/NVlabs/EAGLE/blob/main/Eagle2/Eagle2_report.pdf)
[\[🗨️ Chat Demo\]](http://eagle-vlm.xyz/) [\[🤗 HF Demo\]](TODO)
## Introduction
We are thrilled to release our latest Eagle2 series Vision-Language Model. Open-source Vision-Language Models (VLMs) have made significant strides in narrowing the gap with proprietary models. However, critical details about data strategies and implementation are often missing, limiting reproducibility and innovation. In this project, we focus on VLM post-training from a data-centric perspective, sharing insights into building effective data strategies from scratch. By combining these strategies with robust training recipes and model design, we introduce Eagle2, a family of performant VLMs. Our work aims to empower the open-source community to develop competitive VLMs with transparent processes.
In this repo, we are open-sourcing Eagle2-1B, a compact and efficient model designed for scenarios that require fast inference and minimal computational resources, without compromising essential performance.
## Model Zoo
We provide the following models:
| model name | LLM | Vision | Max Length| HF Link|
| ----------- | ------- |---------|-|-|
| Eagle2-1B | [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) | Siglip | 16K| [🤗 link](https://huggingface.co/NVIDIA/Eagle2-1B)|
| Eagle2-2B | [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) | Siglip | 16K| [🤗 link](https://huggingface.co/NVIDIA/Eagle2-2B)|
| Eagle2-9B | [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Siglip+ConvNext | 16K| [🤗 link](https://huggingface.co/NVIDIA/Eagle2-9B)|
## Benchmark Results
| Benchmark | LLaVa-One-Vision-0.5B | InternVL2-1B | InternVL2.5-1B |Qwen2-VL-2B| Eagle2-1B|
| :--------------------------: | :------------------: | :----------------: | :----------: |:----------: |:----------: |
| DocVQA<sub>test</sub> | 70.0 | 81.7 | 84.8 |90.1|81.8|
| ChartQA<sub>test</sub> | 61.4 | 72.9 | 75.9 |73.0|77.0|
| InfoVQA<sub>test</sub> | 41.8 | 50.9 | 56.0 |65.5|54.8|
| TextVQA<sub>val</sub> | - | 70.0 | 72.0 |79.7|76.6|
| OCRBench | 565 | 754 | 785 |809|767|
| MME<sub>sum</sub> | 1438.0 | 1794.4 | 1950.5 | 1872.0| 1790.2|
| RealWorldQA | 55.6 | 50.3 | 57.5 |62.6|55.4|
| AI2D<sub>test</sub> | 57.1 | 64.1 | 69.3 | 74.7 |70.9|
| MMMU<sub>val</sub> | 31.4 | 36.7 | 40.9 |41.1|38.8|
| MMVet<sub>GPT-4-Turbo</sub> | 32.2 | 32.7 | 48.8 | 49.5 | 40.9 |
| HallBench<sub>avg</sub> | 27.9 | 34.0 | 39.0 | **41.7** | 35.3 |
| MathVista<sub>testmini</sub> | 33.8 | 37.7 | 43.2 |43.0|45.3|
| MMstar | 37.7 | 45.7 | 50.1|48.0|48.5|
## Quick Start
We provide an [inference script](./demo.py) to help you quickly start using the model. We support the following input types:
- pure text input
- single image input
- multiple image input
- video input
### 0. Install the dependencies
```bash
pip install transformers==4.37.2
pip install flash-attn
```
**Note**: The latest version of `transformers` is not compatible with this model; use the pinned version above.
### 1. Prepare the Model worker
<details>
<summary>Click to expand</summary>
```python
"""
A model worker executes the model.
Copied and modified from https://github.com/OpenGVLab/InternVL/blob/main/streamlit_demo/model_worker.py
"""
# Importing torch before transformers can cause `segmentation fault`
from transformers import AutoModel, AutoTokenizer, TextIteratorStreamer, AutoConfig
import argparse
import base64
import json
import os
import decord
import threading
import time
from io import BytesIO
from threading import Thread
import math
import requests
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
import numpy as np
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
SIGLIP_MEAN = (0.5, 0.5, 0.5)
SIGLIP_STD = (0.5, 0.5, 0.5)
def get_seq_frames(total_num_frames, desired_num_frames=-1, stride=-1):
"""
Calculate the indices of frames to extract from a video.
Parameters:
total_num_frames (int): Total number of frames in the video.
desired_num_frames (int): Desired number of frames to extract (used when stride <= 0).
stride (int): Fixed sampling stride; if > 0, every `stride`-th frame is selected instead.
Returns:
list: List of indices of frames to extract.
"""
assert (desired_num_frames > 0 or stride > 0) and not (desired_num_frames > 0 and stride > 0), 'set exactly one of desired_num_frames or stride'
if stride > 0:
return list(range(0, total_num_frames, stride))
# Calculate the size of each segment from which a frame will be extracted
seg_size = float(total_num_frames - 1) / desired_num_frames
seq = []
for i in range(desired_num_frames):
# Calculate the start and end indices of each segment
start = int(np.round(seg_size * i))
end = int(np.round(seg_size * (i + 1)))
# Append the middle index of the segment to the list
seq.append((start + end) // 2)
return seq
def build_video_prompt(meta_list, num_frames, time_position=False):
# if time_position is True, the frame_timestamp is used.
# 1. pass time_position, 2. use env TIME_POSITION
time_position = os.environ.get("TIME_POSITION", time_position)
prefix = f"This is a video:\n"
for i in range(num_frames):
if time_position:
frame_txt = f"Frame {i+1} sampled at {meta_list[i]:.2f} seconds: <image>\n"
else:
frame_txt = f"Frame {i+1}: <image>\n"
prefix += frame_txt
return prefix
def load_video(video_path, num_frames=64, frame_cache_root=None):
if isinstance(video_path, str):
video = decord.VideoReader(video_path)
elif isinstance(video_path, dict):
assert False, 'passing "video_path" as a dict is not supported'
fps = video.get_avg_fps()
sampled_frames = get_seq_frames(len(video), num_frames)
sampled_timestamps = [i / fps for i in sampled_frames]
frames = video.get_batch(sampled_frames).asnumpy()
images = [Image.fromarray(frame) for frame in frames]
return images, build_video_prompt(sampled_timestamps, len(images), time_position=True)
def load_image(image):
if isinstance(image, str) and os.path.exists(image):
return Image.open(image)
elif isinstance(image, dict):
if 'disk_path' in image:
return Image.open(image['disk_path'])
elif 'base64' in image:
return Image.open(BytesIO(base64.b64decode(image['base64'])))
elif 'url' in image:
response = requests.get(image['url'])
return Image.open(BytesIO(response.content))
elif 'bytes' in image:
return Image.open(BytesIO(image['bytes']))
else:
raise ValueError(f'Invalid image: {image}')
else:
raise ValueError(f'Invalid image: {image}')
def build_transform(input_size, norm_type='imagenet'):
if norm_type == 'imagenet':
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
elif norm_type == 'siglip':
MEAN, STD = SIGLIP_MEAN, SIGLIP_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
"""
The previous version mainly focused on the aspect ratio.
Here we also take the area ratio into account.
"""
best_factor = float('-inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
area_ratio = (ratio[0]*ratio[1]*image_size*image_size)/ area
"""
new area > 60% of original image area is enough.
"""
factor_based_on_area_n_ratio = min((ratio[0]*ratio[1]*image_size*image_size)/ area, 0.6)* \
min(target_aspect_ratio/aspect_ratio, aspect_ratio/target_aspect_ratio)
if factor_based_on_area_n_ratio > best_factor:
best_factor = factor_based_on_area_n_ratio
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def split_model(model_path, device):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
print('world_size', world_size)
num_layers_per_gpu_ = math.floor(num_layers / (world_size - 1))
num_layers_per_gpu = [num_layers_per_gpu_] * world_size
num_layers_per_gpu[device] = num_layers - num_layers_per_gpu_ * (world_size-1)
print(num_layers_per_gpu)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = device
device_map['mlp1'] = device
device_map['language_model.model.tok_embeddings'] = device
device_map['language_model.model.embed_tokens'] = device
device_map['language_model.output'] = device
device_map['language_model.model.norm'] = device
device_map['language_model.lm_head'] = device
device_map['language_model.model.rotary_emb'] = device
device_map[f'language_model.model.layers.{num_layers - 1}'] = device
return device_map
class ModelWorker:
def __init__(self, model_path, model_name,
load_8bit, device):
if model_path.endswith('/'):
model_path = model_path[:-1]
if model_name is None:
model_paths = model_path.split('/')
if model_paths[-1].startswith('checkpoint-'):
self.model_name = model_paths[-2] + '_' + model_paths[-1]
else:
self.model_name = model_paths[-1]
else:
self.model_name = model_name
print(f'Loading the model {self.model_name}')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
tokens_to_keep = ['<box>', '</box>', '<ref>', '</ref>']
tokenizer.additional_special_tokens = [item for item in tokenizer.additional_special_tokens if item not in tokens_to_keep]
self.tokenizer = tokenizer
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model_type = config.vision_config.model_type
self.device = torch.cuda.current_device()
if model_type == 'siglip_vision_model':
self.norm_type = 'siglip'
elif model_type == 'MOB':
self.norm_type = 'siglip'
else:
self.norm_type = 'imagenet'
if any(x in model_path.lower() for x in ['34b']):
device_map = split_model(model_path, self.device)
else:
device_map = None
if device_map is not None:
self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
device_map=device_map,
trust_remote_code=True,
load_in_8bit=load_8bit).eval()
else:
self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
trust_remote_code=True,
load_in_8bit=load_8bit).eval()
if not load_8bit and device_map is None:
self.model = self.model.to(device)
self.load_8bit = load_8bit
self.model_path = model_path
self.image_size = self.model.config.force_image_size
self.context_len = tokenizer.model_max_length
self.per_tile_len = 256
def reload_model(self):
del self.model
torch.cuda.empty_cache()
if self.device == 'auto':
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# This can make distributed deployment work properly
self.model = AutoModel.from_pretrained(
self.model_path,
load_in_8bit=self.load_8bit,
torch_dtype=torch.bfloat16,
device_map=self.device_map,
trust_remote_code=True).eval()
else:
self.model = AutoModel.from_pretrained(
self.model_path,
load_in_8bit=self.load_8bit,
torch_dtype=torch.bfloat16,
trust_remote_code=True).eval()
if not self.load_8bit and not self.device == 'auto':
self.model = self.model.cuda()
@torch.inference_mode()
def generate(self, params):
system_message = params['prompt'][0]['content']
send_messages = params['prompt'][1:]
max_input_tiles = params['max_input_tiles']
temperature = params['temperature']
top_p = params['top_p']
max_new_tokens = params['max_new_tokens']
repetition_penalty = params['repetition_penalty']
video_frame_num = params.get('video_frame_num', 64)
do_sample = True if temperature > 0.0 else False
global_image_cnt = 0
history, pil_images, max_input_tile_list = [], [], []
for message in send_messages:
if message['role'] == 'user':
prefix = ''
if 'image' in message:
for image_data in message['image']:
pil_images.append(load_image(image_data))
prefix = prefix + f'<image {global_image_cnt + 1}><image>\n'
global_image_cnt += 1
max_input_tile_list.append(max_input_tiles)
if 'video' in message:
for video_data in message['video']:
video_frames, tmp_prefix = load_video(video_data, num_frames=video_frame_num)
pil_images.extend(video_frames)
prefix = prefix + tmp_prefix
global_image_cnt += len(video_frames)
max_input_tile_list.extend([1] * len(video_frames))
content = prefix + message['content']
history.append([content, ])
else:
history[-1].append(message['content'])
question, history = history[-1][0], history[:-1]
if global_image_cnt == 1:
question = question.replace('<image 1><image>\n', '<image>\n')
history = [[item[0].replace('<image 1><image>\n', '<image>\n'), item[1]] for item in history]
try:
assert len(max_input_tile_list) == len(pil_images), 'The number of max_input_tile_list and pil_images should be the same.'
except Exception as e:
print(f'Error: {e}')
print(f'max_input_tile_list: {max_input_tile_list}, pil_images: {pil_images}')
raise e
old_system_message = self.model.system_message
self.model.system_message = system_message
transform = build_transform(input_size=self.image_size, norm_type=self.norm_type)
if len(pil_images) > 0:
max_input_tiles_limited_by_contect = params['max_input_tiles']
while True:
image_tiles = []
for current_max_input_tiles, pil_image in zip(max_input_tile_list, pil_images):
if self.model.config.dynamic_image_size:
tiles = dynamic_preprocess(
pil_image, image_size=self.image_size, max_num=min(current_max_input_tiles, max_input_tiles_limited_by_contect),
use_thumbnail=self.model.config.use_thumbnail)
else:
tiles = [pil_image]
image_tiles += tiles
if (len(image_tiles) * self.per_tile_len < self.context_len):
break
else:
max_input_tiles_limited_by_contect -= 2
if max_input_tiles_limited_by_contect < 1:
break
pixel_values = [transform(item) for item in image_tiles]
pixel_values = torch.stack(pixel_values).to(self.model.device, dtype=torch.bfloat16)
print(f'Split images to {pixel_values.shape}')
else:
pixel_values = None
generation_config = dict(
num_beams=1,
max_new_tokens=max_new_tokens,
do_sample=do_sample,
temperature=temperature,
repetition_penalty=repetition_penalty,
max_length=self.context_len,
top_p=top_p,
)
response = self.model.chat(
tokenizer=self.tokenizer,
pixel_values=pixel_values,
question=question,
history=history,
return_history=False,
generation_config=generation_config,
)
self.model.system_message = old_system_message
return {'text': response, 'error_code': 0}
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--model-path', type=str, default='NVIDIA/Eagle-2-1B')
parser.add_argument('--model-name', type=str, default='Eagle-2-1B')
parser.add_argument('--device', type=str, default='cuda')
parser.add_argument('--load-8bit', action='store_true')
args = parser.parse_args()
print(f'args: {args}')
worker = ModelWorker(
args.model_path,
args.model_name,
args.load_8bit,
args.device)
```
</details>
### 2. Prepare the Prompt
- Single image input
```python
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe this image in details.',
'image':[
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/[email protected]'}
],
}
]
```
- Multiple image input
```python
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe these two images in details.',
'image':[
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/[email protected]'},
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/[email protected]'}
],
}
]
```
- Video input
```python
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe this video in details.',
'video':[
'path/to/your/video.mp4'
],
}
]
```
### 3. Generate the response
```python
params = {
'prompt': prompt,
'max_input_tiles': 24,
'temperature': 0.7,
'top_p': 1.0,
'max_new_tokens': 4096,
'repetition_penalty': 1.0,
}
worker.generate(params)
```
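The call returns a dict with the generated text and an error code (the worker's `generate` method above returns `{'text': response, 'error_code': 0}`), so a minimal way to read the answer is:
```python
result = worker.generate(params)
print(result['text'])        # generated answer
print(result['error_code'])  # 0 on success
```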
## TODO
- [ ] Support vLLM Inference
- [ ] Provide AWQ Quantization Weights
- [ ] Provide fine-tuning scripts
## License/Terms of Use
- The code is released under the Apache 2.0 license as found in the [LICENSE](https://huggingface.co/NVEagle/Eagle-X5-13B-Chat/blob/main/LICENSE) file.
- The pretrained model weights are released under the [Creative Commons Attribution: Non-Commercial 4.0 International](https://spdx.org/licenses/CC-BY-NC-4.0) <br>
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
- Model License of Qwen2.5-0.5B-Instruct: [Apache-2.0](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE)
- Model License of PaliGemma: [Gemma license](https://ai.google.dev/gemma/terms)
## Citation
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
omarwaleed523/qwen3-8b-arabic-multitask | omarwaleed523 | 2025-04-30T12:18:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T12:18:49Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** omarwaleed523
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vishnump1/Llama-3.1-8b-reasoning-16bit | vishnump1 | 2025-04-30T12:17:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T12:14:38Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vishnump1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ijterror/KeiKniFluxLora | ijterror | 2025-04-30T12:17:04Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T12:16:50Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: krknghtly
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Keira Knightley Lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `krknghtly` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
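If you prefer the Hugging Face `diffusers` library over the UIs above, a minimal sketch (an assumption based on the `diffusers`/`flux`/`lora` tags; the exact safetensors filename may need to be passed via `weight_name`) could look like this:
```python
import torch
from diffusers import FluxPipeline
# Load the FLUX.1-dev base model and apply this LoRA on top of it.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("ijterror/KeiKniFluxLora")  # may need weight_name="<file>.safetensors"
# The trigger word `krknghtly` activates the trained concept.
image = pipe("krknghtly, portrait photo, natural light", num_inference_steps=28).images[0]
image.save("krknghtly.png")
```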
|
mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF | mradermacher | 2025-04-30T12:16:55Z | 138 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:jdqqjr/Qwen2.5-0.5B-Open-R1-Distill",
"base_model:quantized:jdqqjr/Qwen2.5-0.5B-Open-R1-Distill",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-05T21:35:48Z | ---
base_model: jdqqjr/Qwen2.5-0.5B-Open-R1-Distill
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
model_name: Qwen2.5-0.5B-Open-R1-Distill
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jdqqjr/Qwen2.5-0.5B-Open-R1-Distill
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
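As a concrete, hedged example, one option is the `llama-cpp-python` bindings, downloading a single-file quant from this repo with `huggingface_hub` (the Q4_K_M filename is taken from the table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF",
    filename="Qwen2.5-0.5B-Open-R1-Distill.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Question: What is a GGUF file?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```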
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Open-R1-Distill-GGUF/resolve/main/Qwen2.5-0.5B-Open-R1-Distill.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KKrueger/entex_pre | KKrueger | 2025-04-30T12:16:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-30T12:15:23Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: entex_pre
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# entex_pre
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2082
- F1 Macro: 0.8500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
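For reference, these settings map roughly onto `transformers` `TrainingArguments` as sketched below (a hedged reconstruction; `output_dir` is a placeholder and anything not listed above is left at its default):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="entex_pre",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
```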
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0679 | 1.0 | 26 | 0.5775 | 0.2761 |
| 0.3311 | 2.0 | 52 | 0.2341 | 0.7785 |
| 0.186 | 3.0 | 78 | 0.2036 | 0.8220 |
| 0.1366 | 4.0 | 104 | 0.1852 | 0.8459 |
| 0.1003 | 5.0 | 130 | 0.1907 | 0.8516 |
| 0.0793 | 6.0 | 156 | 0.2027 | 0.8486 |
| 0.0668 | 7.0 | 182 | 0.2082 | 0.8500 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mlx-community/InternVL3-14B-4bit | mlx-community | 2025-04-30T12:16:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-1B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-1B-Instruct",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-04-30T12:10:01Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-1B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/InternVL3-14B-4bit
This model was converted to MLX format from [`models/InternVL3-14B`]() using mlx-vlm version **0.1.25**.
Refer to the [original model card](https://huggingface.co/models/InternVL3-14B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/InternVL3-14B-4bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
arnav-yug/charu | arnav-yug | 2025-04-30T12:15:40Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-30T12:15:40Z | ---
license: bigcode-openrail-m
---
|
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-LoRA-vibrant-sweep-3 | niklasm222 | 2025-04-30T12:15:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T12:14:59Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dimasik2987/409c6df8-5bab-44e3-9286-6394ded1480e | dimasik2987 | 2025-04-30T12:10:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T11:44:51Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 409c6df8-5bab-44e3-9286-6394ded1480e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a9326f7302eddb19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a9326f7302eddb19_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/409c6df8-5bab-44e3-9286-6394ded1480e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/a9326f7302eddb19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db4aa502-a89d-4dcd-8f95-900d53e22269
wandb_project: s56-28
wandb_run: your_name
wandb_runid: db4aa502-a89d-4dcd-8f95-900d53e22269
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 409c6df8-5bab-44e3-9286-6394ded1480e
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4513
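Since this repository holds a LoRA adapter (`library_name: peft`) rather than full weights, a hedged loading sketch would attach it to the base model named in the config above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# trust_remote_code is assumed because of the custom_code tag on this repo.
base = AutoModelForCausalLM.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "dimasik2987/409c6df8-5bab-44e3-9286-6394ded1480e")
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)
```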
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.457 | 0.0241 | 200 | 0.4513 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ijterror/EmiRatFluxLora | ijterror | 2025-04-30T12:08:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T12:17:15Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mlyrtjkwsk
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Emily Ratajkowski Lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `mlyrtjkwsk` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
nytopop/Qwen3-30B-A3B.w8a8 | nytopop | 2025-04-30T12:08:05Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-04-30T06:01:03Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
---
Int8 quant for optimized performance on Ampere.
# usage with sglang
Currently, upstream sglang doesn't load this quant correctly due to a few minor issues. Until upstream is fixed, a working fork is available at https://github.com/nytopop/sglang/tree/qwen-30b-a3b:
```shell
uv venv --python 3.12
# use patched sglang from git
uv pip install "git+https://github.com/nytopop/sglang.git@qwen-30b-a3b#subdirectory=python[all]" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
# run
uv run python -m sglang.launch_server --model-path nytopop/Qwen3-30B-A3B.w8a8 --quantization w8a8_int8 --reasoning-parser qwen3
```
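Once the server is running it exposes an OpenAI-compatible API (sglang listens on port 30000 by default; this sketch assumes that default and the `openai` Python client):
```python
from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="nytopop/Qwen3-30B-A3B.w8a8",
    messages=[{"role": "user", "content": "Summarize W8A8 quantization in one sentence."}],
)
print(resp.choices[0].message.content)
```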
# creation
```python
from transformers import AutoModelForCausalLM
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map
model_id = "Qwen/Qwen3-30B-A3B"
model_out = model_id.split("/")[1] + ".w8a8"
device_map = calculate_offload_device_map(
model_id, reserve_for_hessians=False, num_gpus=1, torch_dtype="bfloat16"
)
for k, v in device_map.items():
if v == 'disk':
device_map[k] = 'cpu'
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
torch_dtype="bfloat16",
)
recipe = QuantizationModifier(targets="Linear", scheme="W8A8", ignore=["lm_head", "re:.*mlp.gate$"])
oneshot(model=model, recipe=recipe, output_dir=model_out)
```
|
fats-fme/3c1ca831-d1c7-4606-a1b6-1bdfc611ff60 | fats-fme | 2025-04-30T12:07:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-04-30T11:43:05Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3c1ca831-d1c7-4606-a1b6-1bdfc611ff60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a9326f7302eddb19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a9326f7302eddb19_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/3c1ca831-d1c7-4606-a1b6-1bdfc611ff60
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a9326f7302eddb19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db4aa502-a89d-4dcd-8f95-900d53e22269
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: db4aa502-a89d-4dcd-8f95-900d53e22269
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 3c1ca831-d1c7-4606-a1b6-1bdfc611ff60
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.5980 |
| 0.7336 | 0.0080 | 100 | 0.1882 |
| 0.7057 | 0.0161 | 200 | 0.1803 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AXERA-TECH/Qwen3-4B | AXERA-TECH | 2025-04-30T12:07:26Z | 0 | 0 | null | [
"Qwen",
"Qwen3",
"Int8",
"text-generation",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-30T09:26:37Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- Qwen
- Qwen3
- Int8
---
# Qwen3-4B-Int8
This version of Qwen3-4B-Int8 has been converted to run on the Axera NPU using **w8a16** quantization.
This model has been optimized with the following LoRA:
Compatible with Pulsar2 version: 4.0-temp (not released yet)
## Convert tools links:
For those who are interested in model conversion, you can try to export the axmodel from the original repo:
https://huggingface.co/Qwen/Qwen3-4B
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
|Chips|w8a16|w4a16|
|--|--|--|
|AX650| 4.5 tokens/sec|TBD|
## How to use
Download all files from this repository to the device
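For example, with `huggingface_hub` (a hedged sketch; the local directory name is arbitrary):
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="AXERA-TECH/Qwen3-4B", local_dir="qwen3-4b")
```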
```
root@ax650:/mnt/qtang/llm-test/qwen3-4b# tree -L 1
.
|-- config.json
|-- main_ax650
|-- main_axcl_aarch64
|-- main_axcl_x86
|-- post_config.json
|-- qwen2.5_tokenizer
|-- qwen3-4b-ax650
|-- qwen3_tokenizer
|-- qwen3_tokenizer_uid.py
|-- run_qwen3_4b_int8_ctx_ax650.sh
|-- run_qwen3_4b_int8_ctx_axcl_aarch64.sh
`-- run_qwen3_4b_int8_ctx_axcl_x86.sh
3 directories, 9 files
root@ax650:/mnt/qtang/llm-test/qwen3-4b#
```
#### Start the Tokenizer service
Install requirement
```
pip install transformers jinja2
```
```
root@ax650:/mnt/qtang/llm-test/qwen3-4b# python3 qwen3_tokenizer_uid.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_qwen3_4b_int8_ctx_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/qwen3-4b# ./run_qwen3_4b_int8_ctx_ax650.sh
[I][ Init][ 110]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: 6e90ff82-b9c9-42dc-8f61-081203389166
bos_id: -1, eos_id: 151645
2% | █ | 1 / 39 [3.95s<153.89s, 0.25 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 39 / 39 [48.03s<48.03s, 0.81 count/s] init post axmodel ok,remain_cmm(5621 MB)
[I][ Init][ 188]: max_token_len : 2559
[I][ Init][ 193]: kv_cache_size : 1024, kv_cache_num: 2559
[I][ Init][ 201]: prefill_token_num : 128
[I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 205]: grp: 2, prefill_max_token_num : 256
[I][ Init][ 205]: grp: 3, prefill_max_token_num : 512
[I][ Init][ 205]: grp: 4, prefill_max_token_num : 1024
[I][ Init][ 205]: grp: 5, prefill_max_token_num : 1536
[I][ Init][ 205]: grp: 6, prefill_max_token_num : 2048
[I][ Init][ 209]: prefill_max_token_num : 2048
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": false,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 1,
"top_p": 0.8
}
[I][ Init][ 218]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 270]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 307]: input_num_token:21
[I][ main][ 230]: precompute_len: 21
[I][ main][ 231]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> 1+3=?
[I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:256 precompute_len:21 input_num_token:16
[I][ SetKVCache][ 533]: current prefill_max_token_num:1920
[I][ Run][ 659]: input token num : 16, prefill_split_num : 1
[I][ Run][ 685]: input_num_token:16
[I][ Run][ 808]: ttft: 1169.05 ms
<think>
</think>
1 + 3 = 4
[N][ Run][ 922]: hit eos,avg 4.22 token/s
[I][ GetKVCache][ 499]: precompute_len:48, remaining:2000
prompt >> who are you?
[I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:256 precompute_len:48 input_num_token:16
[I][ SetKVCache][ 533]: current prefill_max_token_num:1920
[I][ Run][ 659]: input token num : 16, prefill_split_num : 1
[I][ Run][ 685]: input_num_token:16
[I][ Run][ 808]: ttft: 1168.56 ms
<think>
</think>
I am Qwen, a large-scale language model developed by Alibaba Cloud. I can answer questions, create content,
and help with a variety of tasks. How can I assist you today?
[N][ Run][ 922]: hit eos,avg 4.22 token/s
[I][ GetKVCache][ 499]: precompute_len:106, remaining:1942
prompt >> q
root@ax650:/mnt/qtang/llm-test/qwen3-4b#
```
#### Inference with M.2 Accelerator card
[What is an M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html). This demo runs on a Raspberry Pi 5.
```
(base) axera@raspberrypi:~/samples/qwen3-4b $ ./run_qwen3_4b_int8_ctx_axcl_aarch64.sh
[I][ Init][ 136]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: a5b1e427-0cdf-4da6-b3a7-f5e0517da0bb
bos_id: -1, eos_id: 151645
2% | █ | 1 / 39 [0.99s<38.45s, 1.01 count/s] tokenizer init ok
[I][ Init][ 45]: LLaMaEmbedSelector use mmap
5% | ██ | 2 / 39 [0.99s<19.23s, 2.03 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
100% | ████████████████████████████████ | 39 / 39 [133.16s<133.16s, 0.29 count/s] init post axmodel ok,remain_cmm(691 MB)(1096 MB)000000000
[I][ Init][ 237]: max_token_len : 2559
[I][ Init][ 240]: kv_cache_size : 1024, kv_cache_num: 2559
[I][ Init][ 248]: prefill_token_num : 128
[I][ Init][ 252]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 252]: grp: 2, prefill_max_token_num : 256
[I][ Init][ 252]: grp: 3, prefill_max_token_num : 512
[I][ Init][ 252]: grp: 4, prefill_max_token_num : 1024
[I][ Init][ 252]: grp: 5, prefill_max_token_num : 1536
[I][ Init][ 252]: grp: 6, prefill_max_token_num : 2048
[I][ Init][ 256]: prefill_max_token_num : 2048
________________________
| ID| remain cmm(MB)|
========================
| 0| 691|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": false,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 1,
"top_p": 0.8
}
[I][ Init][ 279]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 335]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 372]: input_num_token:21
[I][ main][ 236]: precompute_len: 21
[I][ main][ 237]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> who are you
[I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:256 precompute_len:21 input_num_token:27
[I][ SetKVCache][ 631]: current prefill_max_token_num:1920
[I][ Run][ 869]: input token num : 27, prefill_split_num : 1
[I][ Run][ 901]: input_num_token:27
[I][ Run][1030]: ttft: 1339.01 ms
<think>
</think>
I am Qwen, a large-scale language model developed by Alibaba Cloud. I can answer questions,
create content, and help with a variety of tasks. What can I assist you with?
[N][ Run][1182]: hit eos,avg 3.65 token/s
[I][ GetKVCache][ 597]: precompute_len:90, remaining:1958
prompt >> q
[I][ run][ 80]: AXCLWorker exit with devid 0
(base) axera@raspberrypi:~/samples/qwen3-4b $
(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI V3.4.0_20250423020139 Driver V3.4.0_20250423020139 |
+-----------------------------------------+--------------+---------------------------------------+
| Card Name Firmware | Bus-Id | Memory-Usage |
| Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage |
|=========================================+==============+=======================================|
| 0 AX650N V3.4.0 | 0000:01:00.0 | 193 MiB / 945 MiB |
| -- 37C -- / -- | 2% 0% | 6348 MiB / 7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+
+------------------------------------------------------------------------------------------------+
| Processes: |
| Card PID Process Name NPU Memory Usage |
|================================================================================================|
| 0 84643 /home/axera/samples/qwen3-4b/main_axcl_aarch64 4894032 KiB |
+------------------------------------------------------------------------------------------------+
(base) axera@raspberrypi:~ $
``` |
RichardErkhov/leekh7624_-_mymodel2-gguf | RichardErkhov | 2025-04-30T12:07:14Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T04:02:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mymodel2 - GGUF
- Model creator: https://huggingface.co/leekh7624/
- Original model: https://huggingface.co/leekh7624/mymodel2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mymodel2.Q2_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q2_K.gguf) | Q2_K | 2.96GB |
| [mymodel2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [mymodel2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [mymodel2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [mymodel2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [mymodel2.Q3_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q3_K.gguf) | Q3_K | 3.74GB |
| [mymodel2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [mymodel2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [mymodel2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [mymodel2.Q4_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [mymodel2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [mymodel2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [mymodel2.Q4_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q4_K.gguf) | Q4_K | 4.58GB |
| [mymodel2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [mymodel2.Q4_1.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q4_1.gguf) | Q4_1 | 4.78GB |
| [mymodel2.Q5_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [mymodel2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [mymodel2.Q5_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q5_K.gguf) | Q5_K | 5.34GB |
| [mymodel2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [mymodel2.Q5_1.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [mymodel2.Q6_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q6_K.gguf) | Q6_K | 6.14GB |
| [mymodel2.Q8_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_mymodel2-gguf/blob/main/mymodel2.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** leekh7624
- **License:** apache-2.0
- **Finetuned from model :** beomi/Llama-3-Open-Ko-8B-Instruct-preview
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mhlongoke91/asr_finetuned | mhlongoke91 | 2025-04-30T12:01:02Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-23T18:27:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/leekh7624_-_model3-gguf | RichardErkhov | 2025-04-30T11:59:42Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T03:54:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
model3 - GGUF
- Model creator: https://huggingface.co/leekh7624/
- Original model: https://huggingface.co/leekh7624/model3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [model3.Q2_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q2_K.gguf) | Q2_K | 2.96GB |
| [model3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [model3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [model3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [model3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [model3.Q3_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q3_K.gguf) | Q3_K | 3.74GB |
| [model3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [model3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [model3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [model3.Q4_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q4_0.gguf) | Q4_0 | 4.34GB |
| [model3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [model3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [model3.Q4_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q4_K.gguf) | Q4_K | 4.58GB |
| [model3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [model3.Q4_1.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q4_1.gguf) | Q4_1 | 4.78GB |
| [model3.Q5_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q5_0.gguf) | Q5_0 | 5.21GB |
| [model3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [model3.Q5_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q5_K.gguf) | Q5_K | 5.34GB |
| [model3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [model3.Q5_1.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q5_1.gguf) | Q5_1 | 5.65GB |
| [model3.Q6_K.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q6_K.gguf) | Q6_K | 6.14GB |
| [model3.Q8_0.gguf](https://huggingface.co/RichardErkhov/leekh7624_-_model3-gguf/blob/main/model3.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
base_model: leekh7624/model2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** leekh7624
- **License:** apache-2.0
- **Finetuned from model :** leekh7624/model2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VIOLET21/sentiment-bert-tweetx | VIOLET21 | 2025-04-30T11:58:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"id",
"en",
"arxiv:1910.09700",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T11:34:43Z | ---
library_name: transformers
tags:
- sentiment-analysis
- bert
- text-classification
license: apache-2.0
language:
- id
- en
base_model: indobenchmark/indobert-base-p1
pipeline_tag: text-classification
metrics:
- accuracy
---
# Sentiment BERT Tweet
A BERT model fine-tuned for Indonesian tweet sentiment classification.
This model classifies tweets into three sentiment categories:
- **Positive**
- **Negative**
- **Neutral**
## How to Use
```python
from transformers import pipeline
model = pipeline("text-classification", model="VIOLET21/sentiment-bert-indo")
result = model("Saya sangat senang hari ini!")
print(result)
```
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vertings6/9d4daea4-8a2d-44ad-8f6c-0f32e3947423 | vertings6 | 2025-04-30T11:57:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T11:41:46Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d4daea4-8a2d-44ad-8f6c-0f32e3947423
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a9326f7302eddb19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a9326f7302eddb19_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/9d4daea4-8a2d-44ad-8f6c-0f32e3947423
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/a9326f7302eddb19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db4aa502-a89d-4dcd-8f95-900d53e22269
wandb_project: s56-32
wandb_run: your_name
wandb_runid: db4aa502-a89d-4dcd-8f95-900d53e22269
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9d4daea4-8a2d-44ad-8f6c-0f32e3947423
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3023
## Model description
More information needed
## Intended uses & limitations
More information needed
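In the absence of explicit usage guidance, here is a minimal sketch of loading this adapter for inference, assuming the repo contains standard PEFT LoRA weights on top of the base model named in the axolotl config above (device and generation settings are illustrative):

```python
# Unofficial sketch: attach the LoRA adapter from this repo to its base model via PEFT.
# Assumes standard PEFT adapter files; the prompt and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "numind/NuExtract-v1.5"  # base model from the config above
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "vertings6/9d4daea4-8a2d-44ad-8f6c-0f32e3947423")
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

inputs = tokenizer("Extract the key fields from: ...", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```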
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6242 | 0.0161 | 200 | 0.3023 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JayHyeon/Qwen_0.5-CPO-5e-7-3ep | JayHyeon | 2025-04-30T11:57:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"cpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2401.08417",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T06:42:25Z | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-CPO-5e-7-3ep
tags:
- generated_from_trainer
- trl
- cpo
licence: license
---
# Model Card for Qwen_0.5-CPO-5e-7-3ep
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-CPO-5e-7-3ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/urw0k33q)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Katrinart/glvkatrin | Katrinart | 2025-04-30T11:56:45Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T11:12:58Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: glvkatrin
---
# Glvkatrin
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `glvkatrin` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "glvkatrin",
"lora_weights": "https://huggingface.co/Katrinart/glvkatrin/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Katrinart/glvkatrin', weight_name='lora.safetensors')
image = pipeline('glvkatrin').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Katrinart/glvkatrin/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/s1.1-0.5B-GGUF | mradermacher | 2025-04-30T11:56:21Z | 147 | 1 | transformers | [
"transformers",
"gguf",
"ar",
"de",
"en",
"es",
"fr",
"it",
"ja",
"ko",
"pt",
"ru",
"th",
"vi",
"zh",
"dataset:simplescaling/s1K-1.1",
"base_model:2stacks/s1.1-0.5B",
"base_model:quantized:2stacks/s1.1-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-18T02:45:03Z | ---
base_model: 2stacks/s1.1-0.5B
datasets:
- simplescaling/s1K-1.1
language:
- ar
- de
- en
- es
- fr
- it
- ja
- ko
- pt
- ru
- th
- vi
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/2stacks/s1.1-0.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
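As a hedged illustration (every quant in the table below is a single file, so no concatenation is needed here), one of them could be loaded with `llama-cpp-python`; the filename matches the Q4_K_M entry, while the context size and prompt are assumptions:

```python
# Hedged example: load one static quant from this repo with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(repo_id="mradermacher/s1.1-0.5B-GGUF", filename="s1.1-0.5B.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 7? Think step by step."}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```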
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q6_K.gguf) | Q6_K | 0.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.Q8_0.gguf) | Q8_0 | 0.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-0.5B-GGUF/resolve/main/s1.1-0.5B.f16.gguf) | f16 | 1.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-deft-sweep-2 | niklasm222 | 2025-04-30T11:56:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:54:24Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/s1.1-1.5B-GGUF | mradermacher | 2025-04-30T11:53:57Z | 161 | 2 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"zh",
"es",
"pt",
"de",
"it",
"ru",
"ja",
"ko",
"vi",
"th",
"ar",
"dataset:simplescaling/s1K-1.1",
"base_model:2stacks/s1.1-1.5B",
"base_model:quantized:2stacks/s1.1-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-19T08:21:15Z | ---
base_model: 2stacks/s1.1-1.5B
datasets:
- simplescaling/s1K-1.1
language:
- en
- fr
- zh
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/2stacks/s1.1-1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/s1.1-1.5B-GGUF/resolve/main/s1.1-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
maksf8486/7528563f-58f0-489b-a445-63e914b75269 | maksf8486 | 2025-04-30T11:53:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T11:35:00Z | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7528563f-58f0-489b-a445-63e914b75269
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18272c611684fe78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18272c611684fe78_train_data.json
type:
field_input: plan
field_instruction: goal
field_output: revision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: false
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/7528563f-58f0-489b-a445-63e914b75269
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/18272c611684fe78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 64d42576-342f-49b8-be0c-dc909aea067c
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 64d42576-342f-49b8-be0c-dc909aea067c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7528563f-58f0-489b-a445-63e914b75269
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6221 | 0.0280 | 200 | 1.6370 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hangzou/llama-3-2-3b-math-orca-qlora-10k-ep1 | hangzou | 2025-04-30T11:51:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T10:49:19Z | ---
base_model: meta-llama/Llama-3.2-3B
library_name: transformers
model_name: llama-3-2-3b-math-orca-qlora-10k-ep1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3-2-3b-math-orca-qlora-10k-ep1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hangzou/llama-3-2-3b-math-orca-qlora-10k-ep1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Gensyn/Qwen2.5-32B-Instruct-bnb-4bit | Gensyn | 2025-04-30T11:50:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-30T09:54:22Z | ---
base_model: Qwen/Qwen2.5-32B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
There is also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Qwen2.5-32B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 32B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
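A minimal offline-inference sketch with vLLM (the model name comes from this card; sampling values are illustrative, and for inputs beyond 32,768 tokens the YaRN `rope_scaling` shown above still needs to be added to `config.json` first):

```python
# Illustrative vLLM offline-inference sketch; not an official deployment recipe.
# Assumes vLLM is installed and there is enough GPU memory for the 32B weights.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-32B-Instruct")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```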
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
garos/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_amphibious_ox | garos | 2025-04-30T11:49:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lightfooted amphibious ox",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T14:48:45Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_amphibious_ox
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lightfooted amphibious ox
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_amphibious_ox
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="garos/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_amphibious_ox", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DrTiagoSaldanha/ssssss | DrTiagoSaldanha | 2025-04-30T11:48:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T11:48:15Z | ---
license: apache-2.0
---
|
ufoym/Qwen3-8B-Q2_K-GGUF | ufoym | 2025-04-30T11:46:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T11:45:58Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# ufoym/Qwen3-8B-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ufoym/Qwen3-8B-Q2_K-GGUF --hf-file qwen3-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ufoym/Qwen3-8B-Q2_K-GGUF --hf-file qwen3-8b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ufoym/Qwen3-8B-Q2_K-GGUF --hf-file qwen3-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ufoym/Qwen3-8B-Q2_K-GGUF --hf-file qwen3-8b-q2_k.gguf -c 2048
```
|
infogeo/16b43ada-c551-4395-9d37-9c614c50fc21 | infogeo | 2025-04-30T11:46:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T11:35:43Z | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16b43ada-c551-4395-9d37-9c614c50fc21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 05a1d5d398a81bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05a1d5d398a81bd6_train_data.json
type:
field_input: test
field_instruction: question
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/16b43ada-c551-4395-9d37-9c614c50fc21
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/05a1d5d398a81bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
wandb_project: s56-28
wandb_run: your_name
wandb_runid: fd1dd4a2-11ce-46e2-8594-291f6e26aaab
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 16b43ada-c551-4395-9d37-9c614c50fc21
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6224 | 0.1403 | 150 | 0.5889 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JorgeTC/Albert-base-v2-POS | JorgeTC | 2025-04-30T11:43:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-30T11:43:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
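No snippet has been filled in yet; the sketch below is an assumption based only on the repo tags (`albert`, `token-classification`) — the label set and supported language are not documented:
```py
from transformers import pipeline

# Token-classification (POS-style) inference with this checkpoint.
tagger = pipeline("token-classification", model="JorgeTC/Albert-base-v2-POS", aggregation_strategy="simple")
print(tagger("The quick brown fox jumps over the lazy dog."))
```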
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
no0ne-97/misoginia-roberta-base-bne-V3 | no0ne-97 | 2025-04-30T11:41:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T11:40:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
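No snippet has been filled in yet; the sketch below is an assumption based on the repo tags (`roberta`, `text-classification`) and the model name, which suggests a Spanish misogyny classifier built on roberta-base-bne — the returned label names come from the model's config and are not documented here:
```py
from transformers import pipeline

# Text-classification inference; label names depend on the model's config.
clf = pipeline("text-classification", model="no0ne-97/misoginia-roberta-base-bne-V3")
print(clf("Texto de ejemplo para clasificar."))
```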
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1500 | LuckyLukke | 2025-04-30T11:38:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:35:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
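No snippet has been filled in yet; a minimal generation sketch based on the repo tags (`llama`, `text-generation`, `conversational`) — whether a chat template is bundled with the tokenizer is an assumption:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LuckyLukke/grpo_turn_level_onesided_2_starter_change-1500"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assumes the tokenizer ships a chat template; fall back to plain prompting otherwise.
messages = [{"role": "user", "content": "Hi! What can you do?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```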
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/REFUEL-onesided-lora-beta-0.1-3-500 | LuckyLukke | 2025-04-30T11:38:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:35:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/REFUEL-onesided-lora-beta-0.1-3-1000 | LuckyLukke | 2025-04-30T11:37:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:34:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1300 | LuckyLukke | 2025-04-30T11:36:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:33:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-1100 | LuckyLukke | 2025-04-30T11:36:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:33:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-gallant-sweep-1 | niklasm222 | 2025-04-30T11:35:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:33:20Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Srinivas003/ICD-10-Codes-qwen-3-8B-NOR | Srinivas003 | 2025-04-30T11:34:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T11:34:12Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Srinivas003
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ail-sa/akshey_stockyplus_long_test | ail-sa | 2025-04-30T11:33:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T10:29:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Akshey_Stockyplus_Long_Test
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "Sid",
    "lora_weights": "https://huggingface.co/ail-sa/akshey_stockyplus_long_test/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/akshey_stockyplus_long_test', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/akshey_stockyplus_long_test/discussions) to add images that show off what you’ve made with this LoRA.
|
LuckyLukke/grpo_turn_level_onesided_2_starter_change-400 | LuckyLukke | 2025-04-30T11:32:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:28:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-200 | LuckyLukke | 2025-04-30T11:31:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:28:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-600 | LuckyLukke | 2025-04-30T11:31:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:28:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ggml-org/pixtral-12b-GGUF | ggml-org | 2025-04-30T11:30:26Z | 513 | 1 | null | [
"gguf",
"base_model:mistral-community/pixtral-12b",
"base_model:quantized:mistral-community/pixtral-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T14:35:17Z | ---
license: apache-2.0
base_model: mistral-community/pixtral-12b
---
# pixtral-12b
Original model: https://huggingface.co/mistral-community/pixtral-12b
For more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13065
|
Elichika/my-sd-model | Elichika | 2025-04-30T11:28:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-30T11:27:01Z | ---
license: creativeml-openrail-m
---
|
NextGenC/AEE | NextGenC | 2025-04-30T11:13:53Z | 0 | 2 | spacy | [
"spacy",
"epistemology",
"knowledge-base",
"information-extraction",
"bias-detection",
"confidence-scoring",
"nlp",
"en",
"tr",
"dataset:text",
"license:mit",
"region:us"
] | null | 2025-04-13T10:02:39Z | ---
license: mit
language:
- en
- tr
tags:
- epistemology
- knowledge-base
- information-extraction
- bias-detection
- confidence-scoring
- nlp
- spacy
datasets:
- text
---
# Automated Epistemology Engine (AEE) - Era Version



## 🔍 Model Description
**The Automated Epistemology Engine (AEE) Era** is a groundbreaking, first-of-its-kind epistemological processing system designed to revolutionize how we extract, evaluate, and integrate information from diverse textual sources. **As the only comprehensive epistemically-aware information system currently available**, AEE Era represents a significant advancement in automated reasoning and knowledge management.
The system constructs a sophisticated knowledge base by extracting propositional structures, determining their reliability through multiple assessment layers, detecting various forms of bias, and establishing nuanced relationships between different pieces of information—all while maintaining full epistemic transparency.
### 🌟 Key Features
- **🧠 Epistemically Aware Information Extraction**: Extracts subject-relation-value propositions from text with confidence scores calibrated to linguistic modality markers (e.g., "definitely," "might," "perhaps") (see the sketch after this list)
- **⚖️ Plausibility Assessment**: Evaluates how reasonable propositions are based on built-in validation heuristics and world knowledge patterns
- **🔗 Comprehensive Relation Detection**: Identifies not only direct contradictions but also conceptual opposites, synonym-based support, and relational conflicts
- **📊 Multi-factor Source Reliability Calculation**: Dynamically assesses the reliability of information sources based on contradiction patterns and consistency
- **🚩 Advanced Bias Detection**: Flags potential biases including source monoculture (lack of source diversity), unbalanced arguments, and citation circles
- **⭕ Circular Reasoning Detection**: Identifies and penalizes circular support patterns in the knowledge base
- **📈 Sophisticated Confidence Scoring**: Calculates dynamic confidence scores using an integrated model that accounts for source reliability, plausibility, linguistic certainty, supporting evidence, contradicting evidence, and bias penalties
- **📝 Human-readable Explanations**: Generates clear, detailed explanations of the epistemological status of each proposition in the knowledge base
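To make the modality-based calibration above concrete, here is a minimal, hypothetical sketch; the marker list, scores, default value, and function name are illustrative assumptions, not the actual AEE Era implementation:

```python
# Illustrative sketch only: the markers, scores, and default below are assumptions,
# not the calibration table used by AEE Era.
MODALITY_MARKERS = {
    "definitely": 0.95, "certainly": 0.95, "demonstrate": 0.85,
    "likely": 0.7, "might": 0.45, "perhaps": 0.4, "possibly": 0.4,
}

def initial_confidence(sentence: str, default: float = 0.6) -> float:
    """Return a rough initial confidence based on the strongest modality marker found."""
    tokens = sentence.lower().split()
    scores = [score for marker, score in MODALITY_MARKERS.items() if marker in tokens]
    return max(scores) if scores else default

print(initial_confidence("Exercise might help with heart health."))           # ~0.45
print(initial_confidence("Studies demonstrate that exercise reduces risk."))  # ~0.85
```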
## 🚀 Why AEE Era is Revolutionary
As the **first and only system of its kind**, AEE Era stands alone in offering:
1. **Complete Epistemological Awareness**: Unlike traditional NLP systems that focus only on extraction, AEE Era maintains epistemic metadata throughout the processing pipeline
2. **Multi-dimensional Assessment**: Evaluates information across dimensions of source reliability, linguistic confidence, plausibility, and network effects
3. **Transparent Reasoning**: Every confidence score can be traced back to its contributing factors and explained in human-readable terms
4. **Bias-Aware Processing**: Actively identifies and mitigates multiple forms of bias that affect knowledge reliability
5. **Integration of Contradictory Information**: Rather than discarding contradictions, incorporates them into a coherent knowledge framework with appropriate confidence adjustments
## 💡 Intended Uses
AEE Era is designed for applications requiring sophisticated epistemological assessment of information:
- **🔍 Advanced Fact-Checking Systems**: Assess reliability of claims across multiple sources with nuanced confidence scoring
- **🔄 Intelligent Information Integration**: Combine information from diverse sources with appropriate confidence weighting and contradiction resolution
- **📚 Research Analysis Tools**: Analyze the epistemological structure of complex arguments and research literature
- **⭐ Source Credibility Assessment**: Evaluate the reliability of different information sources based on their consistency and agreement patterns
- **👁️ Bias Detection and Mitigation**: Identify potential biases in knowledge bases, citation networks, or information ecosystems
- **❓ Uncertainty-Aware Knowledge Bases**: Build knowledge bases that explicitly represent certainty levels and evidential relationships
## 🛠️ Implementation Details
The AEE Era system consists of these elegantly designed components:
1. **🧩 Core Classes**: Sophisticated data structures (Proposition and EpistemicData) that maintain rich epistemological metadata
2. **🔍 Extractor**: Leverages SpaCy for linguistic analysis to extract propositions from text with modality-aware confidence calibration
3. **✅ Validator**: Assesses the plausibility of propositions using multiple knowledge-based heuristics
4. **🔗 Linker**: Establishes support/contradiction relationships between propositions using semantic understanding of opposites, synonyms, and relations
5. **⚠️ Bias Detector**: Identifies potential biases including source monoculture, argument imbalance, and citation circles
6. **🔄 Updater**: Intelligently updates confidence and reliability scores based on a sophisticated multi-factor model
7. **📋 Explainer**: Generates detailed, human-readable explanations of proposition status and confidence
## 📥 Inputs and Outputs
### Inputs:
- Text documents with source identifiers
- Optional source type information
### Outputs:
- A structured knowledge base of propositions with:
- Extracted structural representation (subject-relation-value)
- Comprehensive epistemological metadata (confidence, reliability, plausibility)
- Rich inter-proposition relationship network (support/contradiction links)
- Bias and quality flags
- Detailed human-readable explanations for each proposition
## 📊 Performance Highlights
AEE Era has demonstrated exceptional capabilities in:
- **Linguistic Modality Recognition**: Accurately calibrates initial confidence based on certainty expressions
- **Contradiction Detection**: Successfully identifies both direct and semantic contradictions
- **Bias Identification**: Effectively flags sources with systematic issues and circular reasoning patterns
- **Confidence Refinement**: Produces well-calibrated final confidence scores that reflect multiple evidence factors
## 📦 Installation and Usage
```bash
# Install required packages
pip install spacy
python -m spacy download en_core_web_sm
# Clone the repository
git clone https://github.com/yourusername/aee-era.git
cd aee-era
# Run the main pipeline
python aee_era_main.py
```
## 💻 Example Usage
```python
from aee_era_main import run_aee_era_pipeline
# Define input texts with source information
inputs = [
    {
        "source_id": "research_paper.edu",
        "source_type": "scientific_paper",
        "text": "Studies demonstrate that regular exercise significantly reduces the risk of cardiovascular disease."
    },
    {
        "source_id": "health_blog.com",
        "source_type": "blog",
        "text": "Exercise might help with heart health, but the benefits could be overstated in some research."
    }
]

# Run the pipeline
knowledge_base = run_aee_era_pipeline(inputs)

# Print the resulting knowledge base and explanations
for prop_id, prop in knowledge_base.items():
    print(f"Proposition: {prop.text_span}")
    print(f"Subject-Relation-Value: {prop.subject_lemma}-{prop.relation_lemma}-{prop.value_lemma}")
    print(f"Initial Confidence: {prop.epistemic_data.initial_confidence}")
    print(f"Final Confidence: {prop.epistemic_data.computed_confidence}")
    print(f"Plausibility: {prop.epistemic_data.plausibility_score}")
    print(f"Bias Flags: {prop.epistemic_data.bias_flags}")
    print("---")
```
## 🔧 Customization Options
AEE Era can be customized in several ways:
- **Plausibility Knowledge**: Extend the validator with domain-specific plausibility rules
- **Synonym and Opposite Dictionaries**: Expand the built-in dictionaries for better relation detection
- **Confidence Parameters**: Adjust the weights of different factors in confidence calculation
- **Source Reliability Thresholds**: Customize how source reliability is assessed
## 📋 Citation
If you use AEE Era in your research, please cite:
```
@software{aee_era_2025,
author = {Abdullah Kocaman},
title = {Automated Epistemology Engine (AEE) - Era Version: A Novel Framework for Epistemically-Aware Information Processing},
year = {2025},
url = {https://huggingface.co/NextGenC/AEE},
description = {The first comprehensive system for epistemically-aware information extraction, evaluation, and integration}
}
```
---
# Otomatik Epistemoloji Motoru (AEE) - Era Sürümü



## 🔍 Model Açıklaması
**Otomatik Epistemoloji Motoru (AEE) Era**, çeşitli metin kaynaklarından bilgileri çıkarma, değerlendirme ve entegre etme şeklimizi devrimselleştirmek için tasarlanmış, türünün ilk örneği olan çığır açıcı bir epistemolojik işleme sistemidir. **Şu anda mevcut olan tek kapsamlı epistemik farkındalığa sahip bilgi sistemi olarak**, AEE Era, otomatik akıl yürütme ve bilgi yönetiminde önemli bir ilerlemeyi temsil eder.
Sistem, önerme yapılarını çıkararak, bunların güvenilirliğini çoklu değerlendirme katmanları aracılığıyla belirleyerek, çeşitli yanlılık biçimlerini tespit ederek ve farklı bilgi parçaları arasında nüanslı ilişkiler kurarak—tüm bunları tam epistemik şeffaflık içinde sürdürerek—sofistike bir bilgi tabanı oluşturur.
### 🌟 Temel Özellikler
- **🧠 Epistemik Farkındalığa Sahip Bilgi Çıkarımı**: Dilbilimsel kiplik belirteçlerine (örn. "kesinlikle," "belki," "muhtemelen") göre kalibre edilmiş güven skorlarıyla metinden özne-ilişki-değer önermelerini çıkarır
- **⚖️ Makullük Değerlendirmesi**: Önermelerin ne kadar makul olduğunu yerleşik doğrulama sezgileri ve dünya bilgisi kalıplarına dayanarak değerlendirir
- **🔗 Kapsamlı İlişki Tespiti**: Sadece doğrudan çelişkileri değil, aynı zamanda kavramsal zıtlıkları, eş anlamlılara dayalı desteği ve ilişkisel çatışmaları da tespit eder
- **📊 Çok Faktörlü Kaynak Güvenilirliği Hesaplaması**: Çelişki kalıpları ve tutarlılığa dayalı olarak bilgi kaynaklarının güvenilirliğini dinamik olarak değerlendirir
- **🚩 Gelişmiş Yanlılık Tespiti**: Kaynak tekelciliği (kaynak çeşitliliği eksikliği), dengesiz argümanlar ve alıntı çemberleri dahil potansiyel yanlılıkları işaretler
- **⭕ Döngüsel Akıl Yürütme Tespiti**: Bilgi tabanındaki döngüsel destek kalıplarını tespit eder ve cezalandırır
- **📈 Sofistike Güven Skorlaması**: Kaynak güvenilirliği, makullük, dilbilimsel kesinlik, destekleyici kanıt, çelişen kanıt ve yanlılık cezalarını hesaba katan entegre bir model kullanarak dinamik güven skorları hesaplar
- **📝 İnsan Tarafından Okunabilir Açıklamalar**: Bilgi tabanındaki her önermenin epistemolojik durumu hakkında net, ayrıntılı açıklamalar üretir
## 🚀 AEE Era Neden Devrim Niteliğinde
**Türünün ilk ve tek sistemi olarak**, AEE Era şunları sunmada benzersizdir:
1. **Tam Epistemolojik Farkındalık**: Sadece çıkarıma odaklanan geleneksel NLP sistemlerinin aksine, AEE Era işleme hattı boyunca epistemik meta verileri korur
2. **Çok Boyutlu Değerlendirme**: Bilgiyi kaynak güvenilirliği, dilbilimsel güven, makullük ve ağ etkileri boyutları açısından değerlendirir
3. **Şeffaf Akıl Yürütme**: Her güven skoru, katkıda bulunan faktörlerine kadar izlenebilir ve insan tarafından okunabilir terimlerle açıklanabilir
4. **Yanlılık Farkında İşleme**: Bilgi güvenilirliğini etkileyen birden çok yanlılık biçimini aktif olarak tanımlar ve azaltır
5. **Çelişkili Bilgilerin Entegrasyonu**: Çelişkileri atmak yerine, onları uygun güven ayarlamalarıyla tutarlı bir bilgi çerçevesine dahil eder
## 💡 Kullanım Alanları
AEE Era, sofistike epistemolojik bilgi değerlendirmesi gerektiren uygulamalar için tasarlanmıştır:
- **🔍 Gelişmiş Doğrulama Sistemleri**: Nüanslı güven skorlaması ile birden çok kaynakta iddiaların güvenilirliğini değerlendirme
- **🔄 Akıllı Bilgi Entegrasyonu**: Çeşitli kaynaklardan gelen bilgileri uygun güven ağırlıklandırması ve çelişki çözümü ile birleştirme
- **📚 Araştırma Analiz Araçları**: Karmaşık argümanların ve araştırma literatürünün epistemolojik yapısını analiz etme
- **⭐ Kaynak Güvenilirliği Değerlendirmesi**: Farklı bilgi kaynaklarının tutarlılık ve anlaşma kalıplarına dayalı olarak güvenilirliğini değerlendirme
- **👁️ Yanlılık Tespiti ve Azaltma**: Bilgi tabanlarında, alıntı ağlarında veya bilgi ekosistemlerinde potansiyel yanlılıkları tespit etme
- **❓ Belirsizlik Farkında Bilgi Tabanları**: Kesinlik düzeylerini ve kanıtsal ilişkileri açıkça temsil eden bilgi tabanları oluşturma
## 🛠️ Uygulama Detayları
AEE Era sistemi, şu zarif şekilde tasarlanmış bileşenlerden oluşur:
1. **🧩 Çekirdek Sınıflar**: Zengin epistemolojik meta verileri koruyan sofistike veri yapıları (Proposition ve EpistemicData)
2. **🔍 Çıkarıcı**: Kiplik farkında güven kalibrasyonu ile metinden önermeleri çıkarmak için SpaCy'den yararlanır
3. **✅ Doğrulayıcı**: Önermelerin makullüğünü birden çok bilgi tabanlı sezgisel yöntem kullanarak değerlendirir
4. **🔗 Bağlayıcı**: Zıtlıkların, eş anlamlıların ve ilişkilerin semantik anlayışını kullanarak önermeler arasında destek/çelişki ilişkileri kurar
5. **⚠️ Yanlılık Algılayıcı**: Kaynak tekelciliği, argüman dengesizliği ve alıntı çemberleri dahil potansiyel yanlılıkları tespit eder
6. **🔄 Güncelleyici**: Sofistike bir çok faktörlü model temelinde güven ve güvenilirlik skorlarını akıllıca günceller
7. **📋 Açıklayıcı**: Önerme durumu ve güveni hakkında ayrıntılı, insan tarafından okunabilir açıklamalar üretir
## 📥 Girdiler ve Çıktılar
### Girdiler:
- Kaynak tanımlayıcıları ile metin belgeleri
- İsteğe bağlı kaynak türü bilgisi
### Çıktılar:
- Aşağıdakileri içeren yapılandırılmış bir önerme bilgi tabanı:
- Çıkarılmış yapısal gösterim (özne-ilişki-değer)
- Kapsamlı epistemolojik meta veriler (güven, güvenilirlik, makullük)
- Zengin önermeler arası ilişki ağı (destek/çelişki bağlantıları)
- Yanlılık ve kalite işaretleri
- Her önerme için ayrıntılı insan tarafından okunabilir açıklamalar
## 📊 Performans Öne Çıkanları
AEE Era şu alanlarda olağanüstü yetenekler göstermiştir:
- **Dilbilimsel Kiplik Tanıma**: Kesinlik ifadelerine dayalı olarak başlangıç güvenini doğru şekilde kalibre eder
- **Çelişki Tespiti**: Hem doğrudan hem de semantik çelişkileri başarıyla tespit eder
- **Yanlılık Tanımlama**: Sistematik sorunları olan kaynakları ve döngüsel akıl yürütme kalıplarını etkili bir şekilde işaretler
- **Güven İyileştirme**: Birden çok kanıt faktörünü yansıtan iyi kalibre edilmiş nihai güven skorları üretir
## 📦 Kurulum ve Kullanım
```bash
# Gerekli paketleri yükleyin
pip install spacy
python -m spacy download en_core_web_sm
# Depoyu klonlayın
git clone https://github.com/kullaniciadi/aee-era.git
cd aee-era
# Ana işlem hattını çalıştırın
python aee_era_main.py
```
## 💻 Örnek Kullanım
```python
from aee_era_main import run_aee_era_pipeline
# Kaynak bilgisi ile giriş metinlerini tanımlayın
inputs = [
    {
        "source_id": "research_paper.edu",
        "source_type": "scientific_paper",
        "text": "Araştırmalar, düzenli egzersizin kardiyovasküler hastalık riskini önemli ölçüde azalttığını göstermektedir."
    },
    {
        "source_id": "health_blog.com",
        "source_type": "blog",
        "text": "Egzersiz kalp sağlığına yardımcı olabilir, ancak faydaları bazı araştırmalarda abartılmış olabilir."
    }
]

# İşlem hattını çalıştırın
knowledge_base = run_aee_era_pipeline(inputs)

# Oluşan bilgi tabanını ve açıklamaları yazdırın
for prop_id, prop in knowledge_base.items():
    print(f"Önerme: {prop.text_span}")
    print(f"Özne-İlişki-Değer: {prop.subject_lemma}-{prop.relation_lemma}-{prop.value_lemma}")
    print(f"Başlangıç Güveni: {prop.epistemic_data.initial_confidence}")
    print(f"Nihai Güven: {prop.epistemic_data.computed_confidence}")
    print(f"Makullük: {prop.epistemic_data.plausibility_score}")
    print(f"Yanlılık İşaretleri: {prop.epistemic_data.bias_flags}")
    print("---")
```
## 🔧 Özelleştirme Seçenekleri
AEE Era birkaç şekilde özelleştirilebilir:
- **Makullük Bilgisi**: Doğrulayıcıyı alan-spesifik makullük kurallarıyla genişletin
- **Eş Anlamlı ve Zıt Sözlükleri**: Daha iyi ilişki tespiti için yerleşik sözlükleri genişletin
- **Güven Parametreleri**: Güven hesaplamasında farklı faktörlerin ağırlıklarını ayarlayın
- **Kaynak Güvenilirliği Eşikleri**: Kaynak güvenilirliğinin nasıl değerlendirileceğini özelleştirin
## 📋 Alıntı
Araştırmanızda AEE Era'yı kullanıyorsanız, lütfen şu şekilde alıntı yapın:
```
@software{aee_era_2025,
author = {Abdullah Kocaman},
title = {Otomatik Epistemoloji Motoru (AEE) - Era Sürümü: Epistemik Farkındalıklı Bilgi İşleme için Yenilikçi Bir Çerçeve},
year = {2025},
url = {https://huggingface.co/NextGenC/AEE},
description = {Epistemik farkındalıklı bilgi çıkarımı, değerlendirmesi ve entegrasyonu için ilk kapsamlı sistem}
}
``` |
ggml-org/SmolVLM-500M-Instruct-GGUF | ggml-org | 2025-04-30T11:12:14Z | 175 | 0 | null | [
"gguf",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T19:02:08Z | ---
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-500M-Instruct
---
# SmolVLM-500M-Instruct
Original model: https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct
For more info, please refer to this PR: https://github.com/ggml-org/llama.cpp/pull/13050
|
Yutao-Zhou/SmolLM2-FT-MyDataset | Yutao-Zhou | 2025-04-30T11:11:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:11:14Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yutao-Zhou/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zyt861107796-the-university-of-melbourne/huggingface/runs/a9izsrlw)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/M1NDB0T-1111-14B-GGUF | mradermacher | 2025-04-30T11:10:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mindbot",
"synthetic-entity",
"agi-companion",
"digital-human",
"llama-factory",
"qwen3-14b",
"mindexpander",
"en",
"base_model:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"base_model:quantized:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T22:56:38Z | ---
base_model: TheMindExpansionNetwork/M1NDB0T-1111-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mindbot
- synthetic-entity
- agi-companion
- digital-human
- llama-factory
- qwen3-14b
- mindexpander
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/M1NDB0T-1111-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
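As a concrete starting point, here is a minimal sketch of downloading one of the quants listed below and running it with the llama-cpp-python bindings; the choice of the Q4_K_M file, the context size, and the prompt are assumptions, and any llama.cpp-based runtime will work just as well:

```python
# Minimal sketch: assumes the llama-cpp-python bindings; any llama.cpp runtime works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M is the "fast, recommended" option below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/M1NDB0T-1111-14B-GGUF",
    filename="M1NDB0T-1111-14B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```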
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/M1NDB0T-1111-14B-GGUF/resolve/main/M1NDB0T-1111-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TianTianSuper/TableMaster-mmocr-fork | TianTianSuper | 2025-04-30T11:09:22Z | 0 | 0 | null | [
"image-to-text",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | image-to-text | 2025-04-30T06:13:23Z | ---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-text
---
# TableMaster-mmocr-fork
Models collected from [JiaquanYe/TableMASTER-mmocr](https://github.com/JiaquanYe/TableMASTER-mmocr), including:
- TableMASTER (TableMASTER_maxlength_500) pretrained model
- Table textline detection model **PSENet** pretrained model
- Table textline recognition model **MASTER** pretrained model |
a-mannion/umls-kgi-bert-fr | a-mannion | 2025-04-30T11:07:52Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"medical",
"fill-mask",
"fr",
"arxiv:2307.11170",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-11-13T16:40:28Z | ---
license: apache-2.0
language:
- fr
tags:
- medical
pipeline_tag: fill-mask
---
# UMLS-KGI-BERT-FR
<!-- Provide a quick summary of what the model is/does. -->
This is a BERT encoder trained on the French-language section of the European Clinical Case corpus as well as the UMLS metathesaurus knowledge graph, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/).
The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the metathesaurus (see our [Github repo](https://github.com/ap-mannion/bertify-umls) for more details).
## Model Details
This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives.
The idea behind this framework was to try to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining in the cross-entropy-based learning paradigm.
- **Developed by:** Aidan Mannion
- **Funded by :** GENCI-IDRIS grant AD011013535R1
- **Model type:** DistilBERT
- **Language(s) (NLP):** French
For further details on the model architecture, training objectives, hardware & software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [ArXiv paper](https://arxiv.org/abs/2307.11170).
### UMLS-KGI Models
| **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** |
|:--------------------------:|:--------------------------------------------------------------------------:|:----------------:|:---------------------:|:---------------------------------------------------------------------------------------------:|:----------------------------:|
| UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 |
| UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 |
| UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 |
| UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 |
| DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 |
| PubMedBERT-UMLS-KGI | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi) | 174MB | BERT | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract | 19,008 |
| BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 |
### Direct/Downstream Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for use in experimental clinical/biomedical NLP work, either as a part of a larger system requiring text encoding or fine-tuned on a specific downstream task requiring clinical language modelling.
It has **not** been sufficiently tested for accuracy, robustness and bias to be used in production settings.
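For quick experimentation, a minimal sketch of querying the model through the `fill-mask` pipeline is shown below; the example sentence is illustrative and the repo id is assumed to be the one this card is published under:

```python
# Minimal sketch, assuming the repo id this card is published under.
from transformers import pipeline

fill = pipeline("fill-mask", model="a-mannion/umls-kgi-bert-fr")

# Build a French clinical-style example around the tokenizer's own mask token.
mask = fill.tokenizer.mask_token
sentence = f"Le patient présente une {mask} aiguë."

for prediction in fill(sentence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```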
### Out-of-Scope Use
Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suitable for use on out-of-domain NLP tasks, and we recommend that it only be used for processing clinical text.
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618)
- [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html)
#### Training Hyperparameters
- sequence length: 256
- learning rate 7.5e-5
- linear learning rate schedule with 10,770 warmup steps
- effective batch size 1500 (15 sequences per batch x 100 gradient accumulation steps)
- MLM masking probability 0.15
**Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
## Citation [BibTeX]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@inproceedings{mannion-etal-2023-umls,
title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition",
author = "Mannion, Aidan and
Schwab, Didier and
Goeuriot, Lorraine",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.35",
pages = "312--322",
abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.",
}
```
```
@misc{mannion2023umlskgibert,
title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition},
author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Geouriot},
year={2023},
eprint={2307.11170},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Apel-sin/GLM-4-9B-0414-abliterated-exl2 | Apel-sin | 2025-04-30T11:07:01Z | 0 | 0 | transformers | [
"transformers",
"abliterated",
"uncensored",
"text-generation",
"zh",
"en",
"base_model:huihui-ai/GLM-4-9B-0414-abliterated",
"base_model:finetune:huihui-ai/GLM-4-9B-0414-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T08:10:43Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
base_model:
- huihui-ai/GLM-4-9B-0414-abliterated
library_name: transformers
tags:
- abliterated
- uncensored
---
# huihui-ai/GLM-4-9B-0414-abliterated
This is an uncensored version of [THUDM/GLM-4-9B-0414](https://huggingface.co/THUDM/GLM-4-9B-0414) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.
## Use with transformers
```
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal

cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")

# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/GLM-4-9B-0414-abliterated"
print(f"Load Model {NEW_MODEL_ID} ... ")

quant_config_4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# bitsandbytes does not support 2-bit quantization, so this lighter config uses 8-bit instead
quant_config_2 = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True
)

model = AutoModelForCausalLM.from_pretrained(
    NEW_MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    quantization_config=quant_config_2,
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

initial_messages = [{"role": "system", "content": "You are a helpful assistant."}]
messages = initial_messages.copy()


class CustomTextStreamer(TextStreamer):
    def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
        super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
        self.generated_text = ""
        self.stop_flag = False

    def on_finalized_text(self, text: str, stream_end: bool = False):
        self.generated_text += text
        print(text, end="", flush=True)
        if self.stop_flag:
            raise StopIteration

    def stop_generation(self):
        self.stop_flag = True


def generate_stream(model, tokenizer, messages, max_new_tokens):
    input_ids = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    )
    attention_mask = torch.ones_like(input_ids, dtype=torch.long)
    tokens = input_ids.to(model.device)
    attention_mask = attention_mask.to(model.device)

    streamer = CustomTextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

    def signal_handler(sig, frame):
        streamer.stop_generation()
        print("\n[Generation stopped by user with Ctrl+C]")

    signal.signal(signal.SIGINT, signal_handler)

    print("Response: ", end="", flush=True)
    try:
        generated_ids = model.generate(
            tokens,
            attention_mask=attention_mask,
            use_cache=False,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            pad_token_id=tokenizer.pad_token_id,
            streamer=streamer
        )
        del generated_ids
    except StopIteration:
        print("\n[Stopped by user]")

    del input_ids, attention_mask
    torch.cuda.empty_cache()
    signal.signal(signal.SIGINT, signal.SIG_DFL)

    return streamer.generated_text, streamer.stop_flag


while True:
    user_input = input("User: ").strip()

    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    if user_input.lower() == "/clear":
        messages = initial_messages.copy()
        print("Chat history cleared. Starting a new conversation.")
        continue

    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    messages.append({"role": "user", "content": user_input})
    response, stop_flag = generate_stream(model, tokenizer, messages, 8192)
    if stop_flag:
        continue
    messages.append({"role": "assistant", "content": response})
```
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue further development and improvement; even a cup of coffee can make a difference.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
|
KMH158/m3d_5epoches | KMH158 | 2025-04-30T11:05:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T05:35:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Io2007/TBPMM | Io2007 | 2025-04-30T11:04:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:AI-MO/Kimina-Prover-Preview-Distill-7B",
"base_model:merge:AI-MO/Kimina-Prover-Preview-Distill-7B",
"base_model:Skywork/Skywork-OR1-Math-7B",
"base_model:merge:Skywork/Skywork-OR1-Math-7B",
"base_model:nvidia/OpenMath-Nemotron-7B",
"base_model:merge:nvidia/OpenMath-Nemotron-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T10:57:57Z | ---
base_model:
- AI-MO/Kimina-Prover-Preview-Distill-7B
- nvidia/OpenMath-Nemotron-7B
- Skywork/Skywork-OR1-Math-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [AI-MO/Kimina-Prover-Preview-Distill-7B](https://huggingface.co/AI-MO/Kimina-Prover-Preview-Distill-7B)
* [nvidia/OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B)
* [Skywork/Skywork-OR1-Math-7B](https://huggingface.co/Skywork/Skywork-OR1-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 28]
    model: Skywork/Skywork-OR1-Math-7B
- sources:
  - layer_range: [0, 28]
    model: AI-MO/Kimina-Prover-Preview-Distill-7B
- sources:
  - layer_range: [0, 28]
    model: nvidia/OpenMath-Nemotron-7B
```
|
DanielSc4/xlmr-large-classifier-pinocchio_it_tra1-eng | DanielSc4 | 2025-04-30T11:03:56Z | 14 | 0 | null | [
"safetensors",
"xlm-roberta",
"text-classification",
"eng",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-04-24T14:13:21Z | ---
language:
- eng
license: apache-2.0
tags:
- text-classification
pipeline_tag: text-classification
---
# xlmr-large-classifier-pinocchio_it_tra1-eng - MT/HT Classifier
This model is a fine-tuned version of [`FacebookAI/xlm-roberta-large`](https://huggingface.co/FacebookAI/xlm-roberta-large) for distinguishing between Machine Translated (MT) and Human Translated (HT) text
(or HT1 and HT2 if using two different human translators).
Training data:
* Train: 1490, for each label: 745
* Validation: 164, for each label: 82
* Test: 214, for each label: 107
Results on the held-out test set:
* Accuracy: 0.9065
* F1-Score: 0.9099
* Precision: 0.8783
* Recall: 0.9439
## Label mapping
Label MT: 0
Label PE: 1 (this is the human translator)
## Info
Upload date: 2025-04-30 00:00
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("DanielSc4/xlmr-large-classifier-pinocchio_it_tra1-eng")
model = AutoModelForSequenceClassification.from_pretrained("DanielSc4/xlmr-large-classifier-pinocchio_it_tra1-eng")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inp = tokenizer('This is a test', return_tensors='pt').to(device)
model = model.to(device)
out = model(**inp)
logits = out.logits
probs = logits.softmax(dim=-1)
pred = probs.argmax(dim=-1).item()
print("Predicted class: " + str(pred)) # 0 for MT, 1 for PE
```
|
ClaMncDexter/gemma-3-test-float16 | ClaMncDexter | 2025-04-30T11:02:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T10:37:57Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ClaMncDexter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
woonstadrotterdam/woningwaardering-llama3-8b-4bit-v1 | woonstadrotterdam | 2025-04-30T11:00:49Z | 3 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:woonstadrotterdam/woningwaarderingen",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"model-index",
"region:us"
] | null | 2025-04-24T07:20:44Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: woningwaardering-llama3-8b-4bit-v1
results:
- task:
name: Woningwaardering
type: text_generation
description: Generate a woningwaardering for a dwelling based on a short description of the dwelling.
metrics:
- name: MAE
type: mae
value: 3.6
- name: MAPE
type: mape
value: 2.3
datasets:
- woonstadrotterdam/woningwaarderingen
language:
- en
---
# woningwaardering-llama3-8b-4bit-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on [woonstadrotterdam/woningwaarderingen](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen). Inspired by [Ed Donner's price model](https://huggingface.co/ed-donner/pricer-2024-09-13_13.04.39) to predict Amazon product prices.
> [!NOTE]
> How many points for this dwelling?
>
> This is an apartment from 1992 with 5 rooms of which 2 are bedrooms. Its surface area is 64m² and its outdoor area is 4m². The energy label is A. The property value is €223k.
>
> Points: 153
## Model description
The model is trained to predict the _woningwaardering_ points of a dwelling based on a short description of the dwelling.
## Intended uses & limitations
This model is intended for educational and research purposes. However, practical use cases can be imagined. For example, estimates can be made for dwellings based on a short description of the dwelling on a real estate website.
Its main limitation is that it has been trained on a fixed format of dwelling descriptions, and may not generalise to other formats. For its other limitations, see the limitations of the [dataset](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen) it was trained on.
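A minimal inference sketch is shown below; loading the gated base model in 4-bit, the generation settings, and the way the answer is read off are assumptions rather than part of this repository:

```python
# Minimal sketch: 4-bit loading and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "woonstadrotterdam/woningwaardering-llama3-8b-4bit-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt in the format shown above, ending right before the answer.
prompt = (
    "How many points for this dwelling?\n\n"
    "This is an apartment from 1992 with 5 rooms of which 2 are bedrooms. "
    "Its surface area is 64m² and its outdoor area is 4m². "
    "The energy label is A. The property value is €223k.\n\n"
    "Points:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```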
## Training and evaluation data
See [woonstadrotterdam/woningwaarderingen](https://huggingface.co/datasets/woonstadrotterdam/woningwaarderingen) for the train, validation and test data.
## Training procedure
See _scripts/training.ipynb_
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 7
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
## Evaluation
See _scripts/evaluation.ipynb_
MAE and MAPE are chosen as the key metrics for evaluation as they are the most easily interpretable metrics for non-data scientists.

|
kiwikiw/mingad4 | kiwikiw | 2025-04-30T10:57:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T10:53:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LIBRAAITECH/Phi-3.5-mini-instruct-NER | LIBRAAITECH | 2025-04-30T10:57:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"ner",
"conversational",
"custom_code",
"en",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T09:33:03Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- ner
---
## Geo-Temporal Entity Recognition Model
This model is a finetuned version of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct). It detects location and date entities in a query, maps the location entities to the corresponding set of countries, and maps the date entities to either a start and end date or the corresponding months. It also generates a version of the query cleaned of the date entities and of any location entities that are countries.
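The sketch below shows one way to query the model and parse the JSON it returns, using the `EXTENDED_INSTRUCTION_TEXT` built from the prompt definition further down; the system/user message layout, the example query, and the generation settings are assumptions rather than an official client:

```python
# Minimal sketch: message layout and generation settings are assumptions.
import json
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LIBRAAITECH/Phi-3.5-mini-instruct-NER",
    trust_remote_code=True,
)

query = "Rainfall anomalies in Kenya and Somalia between March and May 2024"
messages = [
    {"role": "system", "content": EXTENDED_INSTRUCTION_TEXT},  # built as shown below
    {"role": "user", "content": query},
]

result = generator(messages, max_new_tokens=512, do_sample=False, return_full_text=False)
raw = result[0]["generated_text"]
entities = json.loads(raw)  # assumes the model returns well-formed JSON
print(entities["country"], entities["periodStart"], entities["periodEnd"])
```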
The prompt on which the model was finetuned is the following:
```python
from datetime import date
today = date.today()
schema = """
country: Extracted list of country or countries,
periodStart: Period start in ISO 8601 format, filled in only if date-related entity corresponds to absolute date range, else null,
periodEnd: Period end in ISO 8601 format, filled in only if date-related entity corresponds to absolute date range, else null,
phase: A list of months indicated by integers from 1 to 12, filled in only if date-related entity corresponds to a reccuring yearly period. It should be null in case `periodStart` or `periodEnd` has been extracted.,
location: A list of the detected location-related entities,
date: A list of date-related entities,
cleanedQuery: A list of parts of the query cleaned from the extracted date-related entity and the location-related entity and their related parts (e.g. prepositions)
"""
EXTENDED_INSTRUCTION_TEXT = """
You are a Name-Entity Recognition system specialized in extracting and processing location and date related entities from text. Follow these steps:
1. Extract exact entities from the text:
- Location entities: Extract only if they are specific place names (not general terms like "sample locations")
- Date entities: Extract dates exactly as they appear in the text
Both should be extracted exactly as mentioned in the text, without modifications.
2. For each detected location entity:
- Map it to corresponding country name(s)
- If the location itself is a country, include it in the country list
- If country cannot be determined, return an empty list
3. For date-related entities, classify them into one of two categories:
a) Absolute date range:
- Convert to ISO 8601 date format (YYYY-MM-DD)
- Set periodStart and periodEnd
- Set phase to null
- Use %(today)s as reference for relative dates
b) Recurring yearly period:
- Set phase as list of integers (1-12) representing months
- Set periodStart and periodEnd to null
4. Clean the query by removing:
- Detected date entities and their syntactic relations (e.g., prepositions)
- Location entities (only if they are countries) and their relations
Return the remaining parts as a list of strings
Return the results in JSON format matching this schema: %(schema)s
IMPORTANT:
- Always return all fields defined in the schema
- Return only the JSON without any additional explanation or notes
- Ensure the JSON is properly formatted and parsable
""" % {"today": today, "schema": schema}
``` |
MaestrAI/emma-lora-1746008274 | MaestrAI | 2025-04-30T10:57:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T10:17:53Z | # emma LoRA Model
This is a LoRA model for the character Emma.
Created at 2025-04-30 12:17:54
|
phiwi/Meta-Llama-3.1-8B-Instruct-bnb-4bit_regulatome-lora | phiwi | 2025-04-30T10:51:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T10:51:45Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** phiwi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aipgpt/Txt-Polisher-Douyin-Style | aipgpt | 2025-04-30T10:51:03Z | 28 | 3 | null | [
"pytorch",
"qwen2",
"text-generation-inference",
"text-generation",
"conversational",
"zh",
"dataset:aipgpt/douyin_style",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:mit",
"region:us"
] | text-generation | 2025-04-16T02:39:51Z | ---
license: mit
datasets:
- aipgpt/douyin_style
language:
- zh
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
---
## Purpose
Helps you rewrite your voice-over script in the style of a popular Douyin (TikTok) creator.
Currently, three distinct Douyin creator styles are used as references:
"多多喂" – Known for exaggerated humor, high energy, and a down-to-earth, relatable tone.
"Eyeopener" – A humorous science communicator with a lighthearted, vivid, and easy-to-understand approach.
"严伯钧" – Another science-focused creator, but with a more straightforward and calm delivery."
## Train
Qwen/Qwen2.5-14B-Instruct was fine-tuned with SFT on the [douyin_style dataset](https://huggingface.co/datasets/aipgpt/douyin_style/blob/main/style.jsonl).
## Deploy
`vllm serve <model_path> --served-model-name <served_model_name> --dtype auto --kv-cache-dtype auto --gpu_memory_utilization 0.95 --host 0.0.0.0 --port 7000 --max_model_len 30000`
## Test
Use Streamlit, a quick AI-demo framework, to write a small testing program; a sketch is shown below.
The prompt could be, for example: `f"你是一位{douyin_creator_name}, 请把所给的文稿按照{douyin_creator_name}的风格进行改写并用中文输出。"` (roughly: "You are {douyin_creator_name}; please rewrite the given script in {douyin_creator_name}'s style and output it in Chinese.")
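Below is a minimal sketch (not part of the original card) of such a Streamlit test app. It assumes the vLLM server from the Deploy step is listening on port 7000, that `<served_model_name>` matches the `--served-model-name` flag, and that the OpenAI-compatible client is used for illustration.

```python
import streamlit as st
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; no real key is needed.
client = OpenAI(base_url="http://localhost:7000/v1", api_key="EMPTY")

douyin_creator_name = st.selectbox("Creator style", ["多多喂", "Eyeopener", "严伯钧"])
script = st.text_area("Voice-over script to rewrite")

if st.button("Rewrite") and script:
    system_prompt = f"你是一位{douyin_creator_name}, 请把所给的文稿按照{douyin_creator_name}的风格进行改写并用中文输出。"
    response = client.chat.completions.create(
        model="<served_model_name>",  # must match the --served-model-name used at deploy time
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": script},
        ],
        temperature=0.7,
    )
    st.write(response.choices[0].message.content)
```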
## We are training a reasoning model. Stay tuned!!! |
Subh775/Qwen-2.5-7b-hindi-hinglish-cot-sft | Subh775 | 2025-04-30T10:50:27Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"unsloth",
"text-generation-inference",
"trl",
"LoRA",
"text-generation",
"en",
"hi",
"dataset:Subh775/formatted-hindi-hinglish-cot",
"base_model:unsloth/Qwen2.5-7B",
"base_model:adapter:unsloth/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-30T07:35:21Z | ---
license: apache-2.0
datasets:
- Subh775/formatted-hindi-hinglish-cot
language:
- en
- hi
base_model:
- unsloth/Qwen2.5-7B
pipeline_tag: text-generation
library_name: adapter-transformers
tags:
- unsloth
- text-generation-inference
- trl
- LoRA
---
# Qwen-2.5-7b-hindi-hinglish-cot-sft
**Qwen-2.5-7b-hindi-hinglish-cot-sft** is a lightweight model finetuned on the [Subh775/formatted-hindi-hinglish-cot](https://huggingface.co/datasets/Subh775/formatted-hindi-hinglish-cot) dataset, which was formatted with the Alpaca prompt template to make it compatible with the training setup.
> This is a small demonstration of SFT and is intended solely for light, short conversations for fun purposes.
## Summary of the model
- **Base model:** [`unsloth/Qwen2.5-7B`](https://huggingface.co/unsloth/Qwen2.5-7B)
- **LoRA adapter:** `Subh775/Qwen-2.5-7b-hindi-hinglish-cot-sft`
- **Training dataset:** [Subh775/formatted-hindi-hinglish-cot](https://huggingface.co/datasets/Subh775/formatted-hindi-hinglish-cot)
- **Languages:** mainly Hindi and Hinglish
- **Training Time:** 73.25 minutes (1 epoch)
- **Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Quantization:** 4-bit (for efficient inference)
## 💡 Key Features
- 🗣️ **Hindi-Hinglish-CoT:** Trained on ~60K instruction-input-output pairs of Hinglish and Hindi reasoning.
- ⚙️ **Efficient Inference:** Enabled by LoRA + Unsloth + 4-bit quantization.
- 🚀 **Fast and Lightweight:** Optimized for quick inference even on limited hardware.
---
## 🛠️ Inference Instructions
### 🔧 Installation
```bash
pip install unsloth
```
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch
# Load your fine-tuned model
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="Subh775/Qwen-2.5-7b-hindi-hinglish-cot-sft",
max_seq_length=2048,
load_in_4bit=True
)
FastLanguageModel.for_inference(model)
# Streamer for real-time decoding
text_streamer = TextStreamer(tokenizer)
# Alpaca prompt template
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input_text}
### Response:
{output}"""
```
```python
# Chat loop with memory
def chat():
print("💬 Chat with Qwen-2.5-Hindi-Hinglish-COT! Type '\\q' or 'quit' to exit.\n")
chat_history = "" # Full chat history with prompts and responses
while True:
user_input = input("➤ ")
if user_input.lower() in ["\\q", "quit"]:
print("\n👋 Exiting chat. Goodbye!")
break
# Format the current prompt
current_prompt = alpaca_prompt.format(
instruction="Continue the following conversation.",
input_text=user_input,
output=""
)
# Add to full chat history
chat_history += current_prompt + "\n"
# Tokenize the full prompt
inputs = tokenizer([chat_history], return_tensors="pt").to("cuda")
print("\n🤖: ", end="") # Prepare for streaming output
# Generate response using streamer
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.7,
top_p=0.9,
do_sample=True,
no_repeat_ngram_size=2,
streamer=text_streamer
)
# Decode and capture response for chat history
full_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response = full_output.split("### Response:")[-1].strip()
# Add response to chat history
chat_history += f"{response}\n"
# Run the chat
chat()
```
## Training details
- Total Samples: all 60,097 samples from the dataset were processed
- Training Time: ~73 minutes (on 1 epoch)
- Final Step: 120
- Final Training Loss: 1.617100
## Limitations
- Generalized understanding – may not reflect recent advancements
- The model's responses are not always accurate, and it may require further training.
## 📜 License
This model is licensed under the Apache 2.0 License.
## 📚 Citation
```bibtex
@misc{llama3_8b_hinglish_general_2025,
author = {Subh775},
title = {Qwen-2.5-7b-hindi-hinglish-cot-sft},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Subh775/Qwen-2.5-7b-hindi-hinglish-cot-sft}},
note = {Hugging Face Repository}
}
```
|
garos/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_foraging_komodo | garos | 2025-04-30T10:49:18Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am unseen foraging komodo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T13:35:19Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_foraging_komodo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am unseen foraging komodo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_foraging_komodo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="garos/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_foraging_komodo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
drwlf/DocGemma3-4B | drwlf | 2025-04-30T10:47:33Z | 72 | 1 | null | [
"safetensors",
"gemma3",
"text-generation",
"medical-ai",
"question-answering",
"summarization",
"dermatology",
"gemma-3",
"qlora",
"unsloth",
"fine-tuned",
"conversational",
"en",
"dataset:qiaojin/PubMedQA",
"dataset:Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning",
"dataset:lavita/MedQuAD",
"base_model:unsloth/gemma-3-4b-it-qat-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-qat-unsloth-bnb-4bit",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-22T21:42:53Z | ---
license: apache-2.0 # Or appropriate license based on Gemma 3 & datasets
language: en
base_model: unsloth/gemma-3-4b-it-qat-unsloth-bnb-4bit
datasets:
- qiaojin/PubMedQA
- Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning
- lavita/MedQuAD
tags:
- text-generation
- medical-ai
- question-answering
- summarization
- dermatology
- gemma-3
- qlora
- unsloth
- fine-tuned
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: "What are the common symptoms of Rosacea?"
- role: assistant
content: "Rosacea is a chronic skin condition that causes redness and visible blood vessels in your face. Common symptoms include facial flushing, persistent redness, bumps and pimples (similar to acne), visible blood vessels (telangiectasias), and sometimes eye irritation. In some cases, the skin on the nose can thicken (rhinophyma)."
- messages:
- role: user
content: "Summarize this abstract: [Insert a short medical abstract here]"
# Add expected output if desired
---
# Fine-tuned Gemma 3 4B for Medical QA & Summarization (`drwlf/gemma-3`)
This repository contains the fine-tuned weights for the `unsloth/gemma-3-4b-it-qat-unsloth-bnb-4bit` model, fine-tuned on a diverse collection of medical text datasets using Unsloth and QLoRA.
**NOTE:** This model is fine-tuned on **text data only**. It does not possess the multimodal image understanding capabilities of the base Gemma 3 model unless further fine-tuned on image-text data.
## Model Description
* **Base Model:** `unsloth/gemma-3-4b-it-qat-unsloth-bnb-4bit` (Google's Gemma 3 4B instruction-tuned model, optimized by Unsloth).
* **Fine-tuning Method:** QLoRA (4-bit NormalFloat) via the Unsloth library (LoRA r=16, alpha=32).
* **Goal:** To enhance the base model's ability to understand and respond to medical queries, summarize medical text, and provide information relevant to the domains covered in the fine-tuning datasets.
## Intended Uses & Limitations
### Intended Use
This model is intended as an **informational assistant** for **healthcare professionals, researchers, and students**. Potential applications include:
* Answering questions based on medical knowledge derived from PubMed, MedQuAD, and dermatology FAQs.
* Summarizing medical abstracts or articles similar to those in the PubMed Summarization dataset.
* Assisting with information retrieval related to dermatology queries.
* Serving as a foundation for further fine-tuning on more specialized medical tasks or datasets (including potentially multimodal data, leveraging the base Gemma 3 architecture).
### Limitations and Bias
* **🚨 Not a Medical Device:** This model is **NOT** a substitute for professional medical advice, diagnosis, or treatment. It should **NEVER** be used for clinical decision-making.
* **Potential Inaccuracies:** Like all LLMs, this model can generate incorrect information (hallucinate) or produce outputs that seem plausible but are factually wrong. **Always verify critical information** with reliable medical sources and expert consultation.
* **Training Data Bias:** The model's knowledge and potential biases are derived from the underlying base model (Gemma 3) and the specific fine-tuning datasets. These datasets may contain inherent biases (e.g., demographic, geographic) which could be reflected in the model's outputs.
* **Limited Scope:** The fine-tuning data focused on specific sources (PubMed QA/Summarization, Dermatology QA, MedQuAD). The model's expertise will be strongest in these areas and limited in others (e.g., **minimal specific knowledge of plastic surgery or aesthetics** was included in this fine-tuning round).
* **No Formal Evaluation:** Performance has not been rigorously evaluated on standard medical benchmarks. The reported training loss can be found here: https://wandb.ai/alexlupoi-dr-lupoi-aesthetics/huggingface/reports/Untitled-Report--VmlldzoxMjQyNDE1Ng
## How to Use
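A minimal sketch of text-only inference with 🤗 Transformers is shown below; the repository id, pipeline task, and generation settings are assumptions for illustration and should be adapted to your checkpoint.

```python
from transformers import pipeline

# Repository id and settings are assumptions for illustration; adjust to your own checkpoint.
qa = pipeline(
    "text-generation",
    model="drwlf/DocGemma3-4B",
    device_map="auto",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "What are the common symptoms of Rosacea?"}]
result = qa(messages, max_new_tokens=256, return_full_text=False)[0]
print(result["generated_text"])
```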
# Uploaded finetuned model
- **Developed by:** drwlf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tomaarsen/wikipedia-tf-idf-bow | tomaarsen | 2025-04-30T10:44:08Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"en",
"dataset:sentence-transformers/stsb",
"arxiv:1908.10084",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-30T10:44:00Z | ---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: A chef is preparing some food.
sentences:
- Five birds stand on the snow.
- A chef prepared a meal.
- There is no 'still' that is not relative to some other object.
- source_sentence: A woman is adding oil on fishes.
sentences:
- Large cruise ship floating on the water.
- It refers to the maximum f-stop (which is defined as the ratio of focal length
to effective aperture diameter).
- The woman is cutting potatoes.
- source_sentence: The player shoots the winning points.
sentences:
- Minimum wage laws hurt the least skilled, least productive the most.
- The basketball player is about to score points for his team.
- Three televisions, on on the floor, the other two on a box.
- source_sentence: Stars form in star-formation regions, which itself develop from
molecular clouds.
sentences:
- Although I believe Searle is mistaken, I don't think you have found the problem.
- It may be possible for a solar system like ours to exist outside of a galaxy.
- A blond-haired child performing on the trumpet in front of a house while his younger
brother watches.
- source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen
consort, the King has always been the sovereign.
sentences:
- At first, I thought this is a bit of a tricky question.
- A man plays the guitar.
- There is a very good reason not to refer to the Queen's spouse as "King" - because
they aren't the King.
datasets:
- sentence-transformers/stsb
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
co2_eq_emissions:
emissions: 0.08677984252410158
energy_consumed: 0.00022325545668430209
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.001
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SentenceTransformer
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.7290160790683643
name: Pearson Cosine
- type: spearman_cosine
value: 0.729048355335128
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.6451566569994759
name: Pearson Cosine
- type: spearman_cosine
value: 0.6304613140440366
name: Spearman Cosine
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** None tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): BoW()
(1): Dense({'in_features': 25000, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man plays the guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:----------|:-----------|
| pearson_cosine | 0.729 | 0.6452 |
| **spearman_cosine** | **0.729** | **0.6305** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 16 characters</li><li>mean: 31.92 characters</li><li>max: 113 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 31.51 characters</li><li>max: 94 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 12 characters</li><li>mean: 57.37 characters</li><li>max: 144 characters</li></ul> | <ul><li>min: 17 characters</li><li>mean: 56.84 characters</li><li>max: 141 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| 0.5556 | 100 | 0.0747 | 0.0443 | 0.7290 | - |
| -1 | -1 | - | - | - | 0.6305 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.000 kWh
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.001 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.50.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
PR0G3T/ppo-PyramidsRND | PR0G3T | 2025-04-30T10:42:58Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-04-30T10:42:55Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: PR0G3T/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF | mradermacher | 2025-04-30T10:42:35Z | 135 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:huihui-ai/Qwen2.5-0.5B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-0.5B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-08T11:00:58Z | ---
base_model: huihui-ai/Qwen2.5-0.5B-Instruct-abliterated
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
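For a quick local smoke test, one possible route (a sketch, not an official recipe) is to download a single quant with `huggingface_hub` and run it through the `llama-cpp-python` bindings; the file name and settings below are examples, so pick any quant from the table that follows.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo (any file from the table below works).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF",
    filename="Qwen2.5-0.5B-Instruct-abliterated.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of the GGUF format."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```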
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF | mradermacher | 2025-04-30T10:41:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"coder",
"Math",
"RL",
"en",
"base_model:prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
"base_model:quantized:prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-29T01:08:14Z | ---
base_model: prithivMLmods/Eratosthenes-Polymath-14B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- coder
- Math
- RL
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Eratosthenes-Polymath-14B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eratosthenes-Polymath-14B-Instruct-i1-GGUF/resolve/main/Eratosthenes-Polymath-14B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Willowclem/finetuned_starcoder2_3b_test_1 | Willowclem | 2025-04-30T10:37:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T10:37:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/Hanx | TOMFORD79 | 2025-04-30T10:37:25Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-30T10:10:26Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
robinfaro/TiMoE-1B-fineweb_edu-70BT | robinfaro | 2025-04-30T10:36:04Z | 4 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
] | null | 2025-04-29T07:54:01Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
ridiviralvideo/ridi.viral.video.ridhi.video | ridiviralvideo | 2025-04-30T10:35:09Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T10:33:32Z | Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/rerg"> 🌐 Click Here To link (Trending+++ridi.viral.video.ridhi.video.link.arovi.nusrat.ridhi.full.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/rerg"> 🌐 ridi.viral.video.ridhi.video.link.arovi.nusrat.ridhi.full.video
|
maksf8486/bb8ee146-b69a-485e-beb7-392d4059d150 | maksf8486 | 2025-04-30T10:33:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T09:59:52Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bb8ee146-b69a-485e-beb7-392d4059d150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5cfb94c383f95340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5cfb94c383f95340_train_data.json
type:
field_instruction: instruction
field_output: chosen_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: false
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/bb8ee146-b69a-485e-beb7-392d4059d150
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/5cfb94c383f95340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 10dc235b-06a9-410c-a72b-3ec423544136
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 10dc235b-06a9-410c-a72b-3ec423544136
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bb8ee146-b69a-485e-beb7-392d4059d150
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9359 | 0.0244 | 200 | 1.0103 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MapacheFantasma/entregable2 | MapacheFantasma | 2025-04-30T10:31:07Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2025-04-30T10:31:03Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
lijinyang0226/Llama3.1_8B_fine_tuned_model_v2 | lijinyang0226 | 2025-04-30T10:22:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T10:21:10Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lijinyang0226
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Asit03/AI_Agent_V2_Merged | Asit03 | 2025-04-30T10:21:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T09:28:05Z | ---
pipeline_tag: text-generation
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation
- text
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Asit03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jaruiz/q-FrozenLake-v1-4x4-noSlippery | jaruiz | 2025-04-30T10:20:54Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-30T10:20:51Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the small helper from the Deep RL course notebook that
# downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="jaruiz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
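Once loaded, the Q-table can be rolled out greedily. A minimal sketch, assuming the saved dictionary follows the Deep RL course format and exposes the table under a `qtable` key:
```python
import numpy as np

# Greedy evaluation rollout (gymnasium API; model["qtable"] assumed to hold the Q-table).
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```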
|
PleIAs/Pleias-RAG-1B | PleIAs | 2025-04-30T10:20:09Z | 201 | 35 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"fr",
"it",
"de",
"es",
"arxiv:2504.18225",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-07T23:30:25Z | ---
base_model:
- PleIAs/Pleias-1.2B-Preview
language:
- en
- fr
- it
- de
- es
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# Pleias-RAG-1B
<div align="center">
<img src="figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://arxiv.org/abs/2504.18225"><b>Full model report</b></a>
</p>
**Pleias-RAG-1B** is a 1.2 billion parameter Small Reasoning Model, trained for retrieval-augmented generation (RAG), search, and source summarization. It belongs to the first generation of Pleias specialized reasoning models.
Pleias-RAG-1B outperforms most SLMs (4 billion parameters and below) on standardized benchmarks for retrieval-augmented generation (HotPotQA, 2wiki) and is competitive with standard 7-8B models, including Qwen-2.5-7B and Llama-3.1-8B. It is the only SLM to date to maintain consistent RAG performance across leading European languages and to ensure systematic reference grounding for statements.
<p align="center">
<img width="80%" src="figures/pleias_benchmark.png">
</p>
Due to its size, ease of deployment on constrained infrastructure (including mobile phones), and built-in support for factual and accurate information, Pleias-RAG-1B unlocks a range of new use cases for generative AI.
## Features
Pleias-RAG-1B is a specialized language model that uses a series of special tokens to process a structured input (query and sources) and generate a structured output (reasoning sequence and answer with sources). For easier implementation, we encourage using the associated API library.
### Citation support
Pleias-RAG-1B natively generates grounded answers based on excerpts and citations extracted from the provided sources, using a custom syntax inspired by Wikipedia (<ref></ref>). It is one of a handful of open-weights models to date to have been developed with this feature, and the first one designed for actual deployment.
<p align="center">
<img width="80%" src="figures/pleias_anthropic.png">
</p>
In contrast with Anthropic's approach (*Citation mode*), citations are generated entirely by the model rather than produced by external chunking. As a result, we can provide another desirable feature to simplify source checking: citation shortening for longer excerpts (using "(…)").
### RAG reasoning
Pleias-RAG-1B generates a specific reasoning sequence incorporating several proto-agentic abilities for RAG applications. The model is able to make a series of decisions directly:
* Assessing whether the query is understandable.
* Assessing whether the query is trivial enough to not require a lengthy pre-analysis (*adjustable reasoning*)
* Assessing whether the sources do contain enough input to generate a grounded answer.
<p align="center">
<img width="80%" src="figures/rag_workflow.png">
</p>
The structured reasoning traces include the following steps:
* Language detection of the query. The model will always strive to answer in the language of the original query.
* Query analysis and associated query report. The analysis can lead to a standard answer, a shortened reasoning trace/answer for trivial questions, a reformulated query, or a refusal (which, in the context of the application, could be turned into a request for further user input).
* Source analysis and associated source report. This step evaluates the coverage and depth of the provided sources with regard to the query.
* Draft of the final answer.
### Multilinguality
Pleias-RAG-1B is able to read and write in the main European languages: French, German, Italian, Spanish, Polish, Latin and Portuguese.
To date, it is the only SLM with negligible loss of performance in leading European languages on RAG-related tasks. On a translated set of HotPotQA, we observed a significant performance drop in most SLMs, from 10% up to 30-35% for sub-1B models.
<p align="center">
<img width="80%" src="figures/language_benchmark.png">
</p>
We expect the results of any standard English evaluation of the Pleias RAG models to be largely transferable to the main European languages, limiting the costs of evaluation and deployment in multilingual settings.
## Training
Pleias-RAG-1B is trained on a large synthetic dataset emulating retrieval over a wide variety of multilingual open sources from Common Corpus. It provides native support for citation and grounding with literal quotes. Following the latest trends in agentification, the model reintegrates multiple features associated with RAG workflows, such as query routing, query reformulation, and source reranking.
## Evaluation
Pleias-RAG-1B has been evaluated on three standard RAG benchmarks, 2wiki, HotpotQA and MuSique.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
All the benchmarks only assess the "trivial" mode on questions requiring some form of multi-hop reasoning over sources (answer disseminated into different sources) as well as discrimination of distractor sources.
## Deployment
The easiest way to deploy Pleias-RAG-1B is through [our official library](https://github.com/Pleias/Pleias-RAG-Library). It features an API-like workflow with standardized export of the structured reasoning/answer output into json format. A [Colab Notebook](https://colab.research.google.com/drive/1oG0qq0I1fSEV35ezSah-a335bZqmo4_7?usp=sharing) is available for easy tests and experimentations.
A typical minimal example:
```python
from rag_library import RAGWithCitations
rag = RAGWithCitations("PleIAs/Pleias-RAG-1B")
# Define query and sources
query = "What is the capital of France?"
sources = [
{
"text": "Paris is the capital and most populous city of France. With an estimated population of 2,140,526 residents as of January 2019, Paris is the center of the Île-de-France dijon metropolitan area and the hub of French economic, political, and cultural life. The city's landmarks, including the Eiffel Tower, Arc de Triomphe, and Cathedral of Notre-Dame, make it one of the world's most visited tourist destinations.",
"metadata": {"source": "Geographic Encyclopedia", "reliability": "high"}
},
{
"text": "The Eiffel Tower is located in Paris, France. It was constructed from 1887 to 1889 as the entrance to the 1889 World's Fair and was initially criticized by some of France's leading artists and intellectuals for its design. Standing at 324 meters (1,063 ft) tall, it was the tallest man-made structure in the world until the completion of the Chrysler Building in New York City in 1930. The tower receives about 7 million visitors annually and has become an iconic symbol of Paris and France.",
"metadata": {"source": "Travel Guide", "year": 2020}
}
]
# Generate a response
response = rag.generate(query, sources)
# Print the final answer with citations
print(response["processed"]["clean_answer"])
```
With expected output:
```
The capital of France is Paris. This is confirmed by multiple sources, with <|source_id|>1 explicitly stating that "Paris is the capital and most populous city of France"[1].
**Citations**
[1] "Paris is the capital and most populous city of France" [Source 1]
```
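Beyond the clean answer shown above, the returned object also carries the structured reasoning and citation fields. A quick way to inspect them without assuming specific key names (only `response` from the snippet above is reused):
```python
import json

# Dump the full structured response; field names other than "processed"/"clean_answer"
# are not documented here, so inspect the output rather than hard-coding them.
print(json.dumps(response, indent=2, ensure_ascii=False, default=str))
```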
With 1.2B parameters, Pleias-RAG-1B can be readily deployed in many constrained infrastructures, including desktop systems on CPU RAM.
We also release an [unquantized GGUF version](https://huggingface.co/PleIAs/Pleias-RAG-1B-gguf) for deployment on CPU. Our internal performance benchmarks suggest that waiting times are currently acceptable for most uses, even under constrained RAM: about 20 seconds for a complex generation including reasoning traces on 8 GB of RAM and below. Since the model is unquantized, the quality of text generation should be identical to the original model.
Once integrated into a RAG system, Pleias-RAG-1B can also be used in a broader range of non-conversational use cases, including user support and educational assistance. Through this release, we aim to make SLMs workable in production by relying systematically on an externalized memory. |
fats-fme/cabc19cf-36f9-49ce-b8ee-16a014cd6d4c | fats-fme | 2025-04-30T10:19:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-04-30T10:03:56Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cabc19cf-36f9-49ce-b8ee-16a014cd6d4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5cfb94c383f95340_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5cfb94c383f95340_train_data.json
type:
field_instruction: instruction
field_output: chosen_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/cabc19cf-36f9-49ce-b8ee-16a014cd6d4c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 50
micro_batch_size: 1
mlflow_experiment_name: /tmp/5cfb94c383f95340_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 10dc235b-06a9-410c-a72b-3ec423544136
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 10dc235b-06a9-410c-a72b-3ec423544136
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# cabc19cf-36f9-49ce-b8ee-16a014cd6d4c
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.0770 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrMoriarty/ppo-LunarLander-v2 | PrMoriarty | 2025-04-30T10:16:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T17:39:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.81 +/- 17.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the PPO policy (filename assumed).
checkpoint = load_from_hub(repo_id="PrMoriarty/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
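A short rollout sketch, assuming the gymnasium API used by recent stable-baselines3 releases:
```python
import gymnasium as gym

# Run one deterministic episode with the loaded policy.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```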
|
yasminlearn/q-FrozenLake-v1-4x4-noSlippery | yasminlearn | 2025-04-30T10:16:13Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-30T09:02:47Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the small helper from the Deep RL course notebook that
# downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="yasminlearn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jjeccles/opening-hrs-filter-qwen3b-Instruct-march25 | jjeccles | 2025-04-30T10:15:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T13:27:32Z | ---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jjeccles
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SimpleStories/SimpleStories-30M | SimpleStories | 2025-04-30T10:14:39Z | 6 | 0 | null | [
"safetensors",
"llama",
"small-language-model",
"story-generation",
"text-generation",
"efficient-nlp",
"distilled-models",
"en",
"dataset:lennart-finke/SimpleStories",
"arxiv:2504.09184",
"license:mit",
"region:us"
] | text-generation | 2025-04-22T14:14:13Z | ---
license: mit
datasets:
- lennart-finke/SimpleStories
language:
- en
tags:
- small-language-model
- story-generation
- text-generation
- efficient-nlp
- distilled-models
---
# SimpleStories Model Family
The SimpleStories models are a tiny model family created for interpretability research, trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories).
## Usage
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
MODEL_SIZE = "30M"
model_path = "SimpleStories/SimpleStories-{}".format(MODEL_SIZE)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)
model.to("cuda")
model.eval()
prompt = "The curious cat looked at the"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
input_ids = inputs.input_ids.to("cuda")
eos_token_id = 1
with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=400,
        temperature=0.7,
        do_sample=True,
        eos_token_id=eos_token_id
    )
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(f"\nGenerated text:\n{output_text}")
```
## Model Variants
| Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab |
|------------|----------|----------|---------|---------|-------|---------|
| SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 |
| SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 |
| SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 |
| SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 |
| SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 |
## Performance Comparison
Model-evaluated generation quality metrics:
<p align="center">
<img width="80%" src="figures/simplestories_comparison.png">
</p>
## Tokenizer
We use a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset
to build a small tokenizer without compromising on the quality of generation.
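A quick sanity check of the tokenizer, reusing the repo id from the usage snippet above (the 4096 figure comes from this section):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SimpleStories/SimpleStories-30M")
print(tok.vocab_size)  # should report the 4096-entry vocabulary described above
print(tok.tokenize("The curious cat looked at the"))
```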
## Dataset
The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features:
- Story annotation with high-level concepts: theme, topic, style, etc.
- Higher semantic and syntactic diversity through seeded story generation
- Generated by 2024 models
- Several NLP-metrics pre-computed to aid filtering
- ASCII-only guarantee for the English dataset
Read the dataset paper on [arXiv](https://arxiv.org/abs/2504.09184).
## Training
The training and evaluation scripts can be accessed at https://github.com/danbraunai/simple_stories_train
|
phansynguyen98/mix_part_4 | phansynguyen98 | 2025-04-30T10:14:08Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T08:33:13Z | ---
license: apache-2.0
---
|
SimpleStories/SimpleStories-5M | SimpleStories | 2025-04-30T10:14:00Z | 6 | 0 | null | [
"safetensors",
"llama",
"small-language-model",
"story-generation",
"text-generation",
"efficient-nlp",
"distilled-models",
"en",
"dataset:lennart-finke/SimpleStories",
"arxiv:2504.09184",
"license:mit",
"region:us"
] | text-generation | 2025-04-22T14:18:59Z | ---
license: mit
datasets:
- lennart-finke/SimpleStories
language:
- en
tags:
- small-language-model
- story-generation
- text-generation
- efficient-nlp
- distilled-models
---
# SimpleStories Model Family
The SimpleStories models are a tiny model family created for interpretability research, trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories).
## Usage
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
MODEL_SIZE = "5M"
model_path = "SimpleStories/SimpleStories-{}".format(MODEL_SIZE)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)
model.to("cuda")
model.eval()
prompt = "The curious cat looked at the"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
input_ids = inputs.input_ids.to("cuda")
eos_token_id = 1
with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=400,
        temperature=0.7,
        do_sample=True,
        eos_token_id=eos_token_id
    )
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(f"\nGenerated text:\n{output_text}")
```
## Model Variants
| Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab |
|------------|----------|----------|---------|---------|-------|---------|
| SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 |
| SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 |
| SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 |
| SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 |
| SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 |
## Performance Comparison
Model-evaluated generation quality metrics:
<p align="center">
<img width="80%" src="figures/simplestories_comparison.png">
</p>
## Tokenizer
We use a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset
to build a small tokenizer without compromising on the quality of generation.
## Dataset
The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features:
- Story annotation with high-level concepts: theme, topic, style, etc.
- Higher semantic and syntactic diversity through seeded story generation
- Generated by 2024 models
- Several NLP-metrics pre-computed to aid filtering
- ASCII-only guarantee for the English dataset
Read the dataset paper on [arXiv](https://arxiv.org/abs/2504.09184).
## Training
The training and evaluation scripts can be accessed at https://github.com/danbraunai/simple_stories_train
|
SimpleStories/SimpleStories-1.25M | SimpleStories | 2025-04-30T10:13:41Z | 4 | 0 | null | [
"safetensors",
"llama",
"small-language-model",
"story-generation",
"text-generation",
"efficient-nlp",
"distilled-models",
"en",
"dataset:lennart-finke/SimpleStories",
"arxiv:2504.09184",
"license:mit",
"region:us"
] | text-generation | 2025-04-22T14:21:12Z | ---
license: mit
datasets:
- lennart-finke/SimpleStories
language:
- en
tags:
- small-language-model
- story-generation
- text-generation
- efficient-nlp
- distilled-models
---
# SimpleStories Model Family
The SimpleStories models are a tiny model family created for interpretability research, trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories).
## Usage
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
MODEL_SIZE = "1.25M"
model_path = "SimpleStories/SimpleStories-{}".format(MODEL_SIZE)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)
model.to("cuda")
model.eval()
prompt = "The curious cat looked at the"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
input_ids = inputs.input_ids.to("cuda")
eos_token_id = 1
with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=400,
        temperature=0.7,
        do_sample=True,
        eos_token_id=eos_token_id
    )
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(f"\nGenerated text:\n{output_text}")
```
## Model Variants
| Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab |
|------------|----------|----------|---------|---------|-------|---------|
| SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 |
| SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 |
| SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 |
| SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 |
| SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 |
## Performance Comparison
Model-evaluated generation quality metrics:
<p align="center">
<img width="80%" src="figures/simplestories_comparison.png">
</p>
## Tokenizer
We use a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset
to build a small tokenizer without compromising on the quality of generation.
## Dataset
The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features:
- Story annotation with high-level concepts: theme, topic, style, etc.
- Higher semantic and syntactic diversity through seeded story generation
- Generated by 2024 models
- Several NLP-metrics pre-computed to aid filtering
- ASCII-only guarantee for the English dataset
Read the dataset paper on [arXiv](https://arxiv.org/abs/2504.09184).
## Training
The training and evaluation scripts can be accessed at https://github.com/danbraunai/simple_stories_train
|
Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF | Qwe1325 | 2025-04-30T10:13:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"dataset:yentinglin/TaiwanChat",
"base_model:jslin09/gemma2-2b-it-tw",
"base_model:quantized:jslin09/gemma2-2b-it-tw",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T10:13:24Z | ---
base_model: jslin09/gemma2-2b-it-tw
datasets:
- yentinglin/TaiwanChat
language:
- zh
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF
This model was converted to GGUF format from [`jslin09/gemma2-2b-it-tw`](https://huggingface.co/jslin09/gemma2-2b-it-tw) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jslin09/gemma2-2b-it-tw) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Qwe1325/gemma2-2b-it-tw-Q4_K_M-GGUF --hf-file gemma2-2b-it-tw-q4_k_m.gguf -c 2048
```
|
kjsbrian/mango-recall-classifier | kjsbrian | 2025-04-30T10:10:47Z | 57 | 0 | null | [
"safetensors",
"electra",
"text-classification",
"license:mit",
"region:us"
] | text-classification | 2025-04-26T02:42:48Z | ---
license: mit
pipeline_tag: text-classification
--- |
phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105 | phililp-arnold | 2025-04-30T10:07:47Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"region:us"
] | null | 2025-04-30T10:07:17Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
model-index:
- name: phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF | mradermacher | 2025-04-30T10:07:15Z | 3,585 | 2 | transformers | [
"transformers",
"gguf",
"medical",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned",
"base_model:quantized:Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-03T06:44:14Z | ---
base_model: Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
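If you prefer fetching a single quant programmatically, here is a minimal sketch with `huggingface_hub` (the filename is copied from the Q4_K_M row of the table below):
```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant; pass the resulting path to your GGUF runtime.
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF",
    filename="Qwen2.5-3B-instruct-medical-finetuned.i1-Q4_K_M.gguf",
)
print(path)
```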
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-instruct-medical-finetuned-i1-GGUF/resolve/main/Qwen2.5-3B-instruct-medical-finetuned.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
elliotthwangmsa/Kimlam-OpenChat-tw | elliotthwangmsa | 2025-04-30T10:06:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T09:54:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
loss: 0.3209
Custom training for Traditional Chinese.
|
18-Jobz-Hunting-Sajal-Malik-Viral-Video-Xn/Full.Clip.Jobz.Hunting.Sajal.Malik.Viral.Video.Original.Link | 18-Jobz-Hunting-Sajal-Malik-Viral-Video-Xn | 2025-04-30T10:05:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T10:04:52Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n7shfr3?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Sajal Malik's viral video is trending across social media, sparking widespread interest. This post covers what’s actually happening, separating facts from speculation. We dive into how the video gained traction, public reactions, and why it’s making headlines. This article strictly follows Blogger and AdSense guidelines, offering an educational and respectful analysis. Learn what’s true, what’s exaggerated, and why it matters in the age of viral content. Stay informed and avoid misinformation by reading the full story behind the Sajal Malik viral video trending
|
hZzy/mistral-7b-expo-7b-DPO-25-last-try-1 | hZzy | 2025-04-30T10:05:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/direction_right2",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:adapter:hZzy/mistral-7b-sft-25-1",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T02:14:18Z | ---
base_model: hZzy/mistral-7b-sft-25-1
datasets:
- hZzy/direction_right2
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
model-index:
- name: mistral-7b-expo-7b-DPO-25-last-try-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-expo-7b-DPO-25-last-try-1
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6002
- Objective: 0.6139
- Logp Accuracy: 0.6636
- Log Diff Policy: 43.6726
- Chosen Logps: -309.4410
- Rejected Logps: -353.1136
- Logits: -1.3787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- total_eval_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------------:|:---------------:|:------------:|:--------------:|:-------:|
| 0.693 | 0.0758 | 50 | 0.6930 | 0.6930 | 0.5154 | 0.4047 | -93.9515 | -94.3562 | -2.1995 |
| 0.6917 | 0.1517 | 100 | 0.6921 | 0.6921 | 0.5190 | 0.6108 | -92.2034 | -92.8142 | -2.2036 |
| 0.6865 | 0.2275 | 150 | 0.6868 | 0.6872 | 0.5366 | 1.7198 | -92.9224 | -94.6422 | -2.1207 |
| 0.6507 | 0.3033 | 200 | 0.6631 | 0.6684 | 0.5845 | 9.5136 | -127.5494 | -137.0630 | -1.8213 |
| 0.629 | 0.3792 | 250 | 0.6505 | 0.6583 | 0.6035 | 15.4656 | -131.4656 | -146.9312 | -1.8424 |
| 0.634 | 0.4550 | 300 | 0.6336 | 0.6415 | 0.6244 | 23.3148 | -187.6798 | -210.9946 | -1.6750 |
| 0.5837 | 0.5308 | 350 | 0.6326 | 0.6470 | 0.6331 | 32.9779 | -242.8130 | -275.7909 | -1.6081 |
| 0.5783 | 0.6067 | 400 | 0.6269 | 0.6363 | 0.6451 | 32.5418 | -177.1183 | -209.6601 | -1.7388 |
| 0.5749 | 0.6825 | 450 | 0.6155 | 0.6246 | 0.6499 | 36.7054 | -217.9877 | -254.6931 | -1.6474 |
| 0.5651 | 0.7583 | 500 | 0.6151 | 0.6275 | 0.6527 | 43.9688 | -287.4218 | -331.3907 | -1.6310 |
| 0.5515 | 0.8342 | 550 | 0.6107 | 0.6214 | 0.6602 | 44.2664 | -323.9571 | -368.2235 | -1.4372 |
| 0.5467 | 0.9100 | 600 | 0.6016 | 0.6105 | 0.6681 | 43.5348 | -248.7065 | -292.2413 | -1.4585 |
| 0.5926 | 0.9858 | 650 | 0.6003 | 0.6130 | 0.6653 | 41.5848 | -276.2677 | -317.8525 | -1.5049 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.20.3 |
ail-sa/akshey_1photo_test1 | ail-sa | 2025-04-30T10:03:28Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T09:25:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Akshey_1Photo_Test1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/akshey_1photo_test1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/akshey_1photo_test1', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/akshey_1photo_test1/discussions) to add images that show off what you’ve made with this LoRA.
|
prithivMLmods/Qwen3-4B-ft-bf16 | prithivMLmods | 2025-04-30T10:01:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"moe",
"moderately abliterated variant",
"text-generation-inference",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T09:29:21Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
- moderately abliterated variant
- text-generation-inference
---

# **Qwen3-4B-ft-bf16**
> **Qwen3-4B-ft-bf16** is a fine-tuned, moderately abliterated version of the Qwen3-4B model. Designed for **enhanced context awareness** and **controlled expressiveness**, this model balances precision with creativity across a wide range of tasks—from complex reasoning to natural dialogue, code generation, and multilingual understanding.
### Key Features:
- **Improved Context Awareness**
Retains and utilizes long-range contextual information effectively, making it ideal for long-form conversations, document understanding, and summarization tasks.
- **Moderate Abliteration**
Introduces measured behavioral flexibility that enhances creativity and adaptability while maintaining reliability, alignment, and safety in outputs.
- **Dual Thinking Modes**
Supports dynamic switching between *thinking* mode (for math, logic, and coding) and *non-thinking* mode (for general-purpose conversations), ensuring optimal task matching.
- **Multilingual Mastery**
Excels in over 100 languages and dialects for translation, multilingual chat, and cross-lingual reasoning.
- **Tool-Ready Agent Capabilities**
Designed to integrate with tool APIs and complex workflows, with consistent performance in both thinking and non-thinking contexts.
---
## Quickstart with Hugging Face Transformers🤗
```bash
pip install transformers==4.51.3
pip install huggingface_hub[hf_xet]
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Qwen3-4B-ft-bf16"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# Define input
prompt = "Describe how renewable energy impacts economic development."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate output
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# Parse thinking content
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip()
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip()
print("thinking content:", thinking_content)
print("content:", content)
```
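As a follow-up to the quickstart, the generation call can be switched to the thinking-mode sampling settings recommended under Best Practices below (values taken from that section; all other objects are reused from the snippet above):
```python
# Thinking-mode sampling per the Best Practices section below.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
```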
---
## Best Practices
- **Sampling Settings**:
- *Thinking mode*: `temperature=0.6`, `top_p=0.95`, `top_k=20`
- *Non-thinking mode*: `temperature=0.7`, `top_p=0.8`, `top_k=20`
- **Token Length**:
- Standard: `32768 tokens`
- Extended Reasoning Tasks: `up to 38912 tokens`
- **Prompt Design**:
- **Math Problems**: Add `"Please reason step by step, and put your final answer within \boxed{}."`
- **MCQs**: Format answers as `{"answer": "B"}` for easy parsing.
- **Multi-turn**: Omit thinking logs in conversation history for cleaner context. |
kush137/astrophysics_adapted_llama_3.1_8b | kush137 | 2025-04-30T10:01:19Z | 0 | 0 | transformers | [
"transformers",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-29T14:18:50Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kush137
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF | WTNLXTBL | 2025-04-30T10:01:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:quantized:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T10:00:55Z | ---
base_model: Qwen/Qwen3-4B-Base
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-4B-Base`](https://huggingface.co/Qwen/Qwen3-4B-Base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-4B-Base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo WTNLXTBL/Qwen3-4B-Base-Q4_K_M-GGUF --hf-file qwen3-4b-base-q4_k_m.gguf -c 2048
```
|
prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF | prithivMLmods | 2025-04-30T10:00:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"moe",
"moderately abliterated variant",
"llama-cpp",
"gguf-my-repo",
"Qwen3",
"text-generation",
"en",
"base_model:prithivMLmods/Qwen3-4B-ft-bf16",
"base_model:quantized:prithivMLmods/Qwen3-4B-ft-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T09:56:50Z | ---
base_model: prithivMLmods/Qwen3-4B-ft-bf16
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- moe
- moderately abliterated variant
- llama-cpp
- gguf-my-repo
- Qwen3
---
# prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF
This model was converted to GGUF format from [`prithivMLmods/Qwen3-4B-ft-bf16`](https://huggingface.co/prithivMLmods/Qwen3-4B-ft-bf16) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Qwen3-4B-ft-bf16) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF --hf-file qwen3-4b-ft-bf16-q8_0.gguf -c 2048
``` |