modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | list | string | timestamp[us, tz=UTC] | string
---|---|---|---|---|---|---|---|---|---
fhswf/tiny-stack-tokenizer
|
fhswf
| 2025-08-11T10:06:56Z | 42 | 0 | null |
[
"gpt2",
"region:us"
] | null | 2025-02-17T11:11:50Z |
# TinyStack Tokenizer
A ByteLevel BPE tokenizer trained on the fhswf/tiny-stack dataset.
## Usage
```python
from tokenizers.implementations import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
tokenizer = ByteLevelBPETokenizer("./vocab.json", "./merges.txt")
tokenizer._tokenizer.post_processor = BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=512)
```
Vocab size: 52000
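As a quick sanity check, the tokenizer can round-trip a short snippet. A minimal sketch, assuming `vocab.json` and `merges.txt` are in the working directory (the example string is arbitrary):
```python
from tokenizers.implementations import ByteLevelBPETokenizer

# Load the trained vocabulary and merge rules.
tokenizer = ByteLevelBPETokenizer("./vocab.json", "./merges.txt")

# Encode a small code snippet and inspect tokens and ids.
encoding = tokenizer.encode("def add(a, b):\n    return a + b")
print(encoding.tokens)
print(encoding.ids)

# Decode back to text to verify the round trip.
print(tokenizer.decode(encoding.ids))
```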
|
BytedanceDouyinContent/SAIL-VL-1d7-Thinking-2B-2507
|
BytedanceDouyinContent
| 2025-08-11T10:01:17Z | 0 | 0 | null |
[
"safetensors",
"internvl_chat",
"custom_code",
"en",
"zh",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:00:35Z |
---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
---
## Introduction
Introducing **SAIL-VL-1.7-Thinking-2507**, our latest reasoning model, which achieves SOTA on the OpenCompass reasoning benchmark among comparably sized models. Its architecture combines a SAILVIT vision encoder with the Qwen3-2B/7B language model, trained using the DAPO algorithm on a curated dataset of over 70,000 multimodal STEM examples. We are releasing this model as open source to support the community.
## Performance
| Model | Size | Average | DynaMath | LogicVista | MathVerse | MathVision | WeMath | MathVista_MINI |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| VLAA-Thinker-3B (Previous SOTA) | 3B | 35.4 | 18.2 | 38.5 | 36.4 | 24.4 | **33.8** | 61.0 |
| InternVL3-2B | 2B | 29.1 | 14.8 | 34.7 | 24.5 | 20.2 | 22.9 | 57.6 |
| Qwen2.5-VL-3B | 3B | 31.8 | 13.2 | **40.3** | 31.2 | 21.9 | 22.9 | 61.2 |
| **SAIL-VL-1.7-Thinking-2B-2507** | **2B** | **36.2** | **19.4** | 35.8 | **42.3** | **24.5** | 27.4 | **67.7** |
| WeThink-7B (Previous SOTA) | 8B | 44.3 | 24.8 | **51.2** | 44.2 | 26.0 | **48.0** | 71.6 |
| InternVL3-8B | 8B | 41.4 | 25.7 | 44.5 | 38.5 | 30.0 | 39.5 | 70.5 |
| Qwen2.5-VL-7B | 7B | 40.1 | 21.8 | 47.9 | 41.1 | 25.4 | 36.2 | 68.1 |
| **SAIL-VL-1.7-Thinking-8B-2507** | **8B** | **45.8** | **29.6** | 43.6 | **57.1** | **31.6** | 39.62 | **73.4** |
## Inference
Below we show how to run inference with our model using the `transformers` library. It requires `einops`, `transformers`, and `timm`.
```python
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=10, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=10):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
path = "BytedanceDouyinContent/SAIL-VL-1d7-Thinking-2B-2507"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('31443256.jpg', max_num=10).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question} Assistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question} Assistant: {response}')
# single-image single-round conversation
question = '<image> Please describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question} Assistant: {response}')
# single-image multi-round conversation
question = '<image> Please describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question} Assistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question} Assistant: {response}')
```
## License
This project is licensed under [Apache License 2.0](LICENSE).
## Contact
If you have any questions, please feel free to contact us: [email protected]
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754905350
|
nilli2038
| 2025-08-11T09:43:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:42:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Userb1az/Qwen3-30B-A3B-GGUF
|
Userb1az
| 2025-08-11T09:41:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-30B-A3B-Base",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T08:46:22Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-30B-A3B-Base
---
# Qwen3-30B-A3B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-30B-A3B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3-MoE has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-30B-A3B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
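Once one of these servers is running, the endpoint can be queried with any OpenAI-compatible client. Below is a minimal sketch assuming the server from above listens on `http://localhost:8000/v1` (vLLM's default; adjust the port for SGLang) and that the `openai` Python package is installed:
```python
from openai import OpenAI

# Point the client at the locally hosted OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```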
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-30B-A3B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-30B-A3B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification. (For `transformers`, a short Python sketch for patching a local `config.json` follows after this list.)
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
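When loading with `transformers`, the same `rope_scaling` block from the first option can be patched into a locally downloaded checkpoint's `config.json` before loading. A minimal Python sketch (the checkpoint path is a placeholder):
```python
import json
from pathlib import Path

config_path = Path("./Qwen3-30B-A3B/config.json")  # placeholder path to a local checkpoint
config = json.loads(config_path.read_text())

# Add the YaRN rope scaling block recommended above for 131,072-token contexts.
config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config_path.write_text(json.dumps(config, indent=2))
```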
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade to `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not use the Jinja2 chat template directly, it is up to the developers to ensure that this best practice is followed (see the sketch after this list).
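To tie points 1 and 4 together, here is a minimal sketch (the prompt is only an example) that applies the recommended thinking-mode sampling settings and keeps only the final answer, without the thinking content, in the conversation history. It reuses the `</think>` token id (151668) split from the Quickstart above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 12 * 17? Please reason step by step, and put your final answer within \\boxed{}."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Thinking-mode sampling settings recommended in point 1 (no greedy decoding).
generated = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    max_new_tokens=32768,
)
output_ids = generated[0][inputs.input_ids.shape[1]:].tolist()

# Split off the thinking block at the last </think> token, as in the Quickstart.
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
final_answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

# Per point 4, only the final answer (no thinking content) goes back into the history.
messages.append({"role": "assistant", "content": final_answer})
```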
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
SelmaNajih001/results2
|
SelmaNajih001
| 2025-08-11T09:38:53Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-10T11:05:38Z |
---
library_name: transformers
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1156
- Accuracy: 0.9712
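For a quick test, the classifier can be loaded with the `transformers` pipeline. A minimal sketch (the label set depends on the training data, which is not documented here, and the example input is arbitrary):
```python
from transformers import pipeline

# Hypothetical usage; labels and intended input domain are not documented in this card.
classifier = pipeline("text-classification", model="SelmaNajih001/results2")
print(classifier("Example document to classify."))
```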
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2116 | 1.0 | 367 | 0.0978 | 0.9652 |
| 0.0917 | 2.0 | 734 | 0.1043 | 0.9671 |
| 0.0679 | 3.0 | 1101 | 0.0930 | 0.9686 |
| 0.0546 | 4.0 | 1468 | 0.1007 | 0.9693 |
| 0.0417 | 5.0 | 1835 | 0.1227 | 0.9695 |
| 0.0331 | 6.0 | 2202 | 0.1156 | 0.9712 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
pdjack/roberta-base-klue-ynat-classification
|
pdjack
| 2025-08-11T09:32:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T09:32:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_20_4_all_37_0.0001_2560_1
|
winnieyangwannan
| 2025-08-11T09:27:06Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:51:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nullifier00/blockassist-bc-slimy_lanky_bison_1754902976
|
Nullifier00
| 2025-08-11T09:26:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy lanky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:26:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy lanky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1754902715
|
koloni
| 2025-08-11T09:24:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:24:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ashupasaya/blockassist-bc-scruffy_chattering_bat_1754904123
|
ashupasaya
| 2025-08-11T09:23:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy chattering bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:22:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy chattering bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatmhd1995/phi35_ft_llm_4_annotation_lora_rnd1
|
fatmhd1995
| 2025-08-11T09:22:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T09:22:43Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fatmhd1995
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aleebaster/blockassist-bc-sly_eager_boar_1754900967
|
aleebaster
| 2025-08-11T08:56:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yujiangw/Qwen3-1.7B-GRPO
|
yujiangw
| 2025-08-11T08:40:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-01T22:05:46Z |
---
library_name: transformers
model_name: Qwen3-1.7B-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-1.7B-GRPO
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yujiangw/Qwen3-1.7B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yujiangw-carnegie-mellon-university/huggingface/runs/0scpjf6g)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-08-11T08:14:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"model-merging",
"mergekit",
"lazymergekit",
"qwen3",
"4b",
"text-generation",
"causal-lm",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Idavidrein/gpqa",
"base_model:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"base_model:merge:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-08-11T08:14:43Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- model-merging
- mergekit
- lazymergekit
- qwen3
- 4b
- text-generation
- causal-lm
- llama-cpp
- gguf-my-repo
datasets:
- Idavidrein/gpqa
metrics:
- accuracy
base_model: ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0
base_model_relation: merge
model-index:
- name: qwen3-4b-merged---configuration-1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (Massive Multitask Language Understanding)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 72.51
name: MMLU (5-shot)
verified: false
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (Graduate-level Physics Q&A)
type: Idavidrein/gpqa
config: gpqa_diamond
split: test
args:
num_few_shot: 0
metrics:
- type: accuracy
value: 45.45
name: GPQA Diamond (0-shot)
verified: false
---
# fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF
This model was converted to GGUF format from [`ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0`](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -c 2048
```
|
hin123123/theralingua-mistral-7b-word
|
hin123123
| 2025-08-11T08:06:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-v0.3",
"lora",
"transformers",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T02:48:04Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- base_model:adapter:mistralai/Mistral-7B-v0.3
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: theralingua-mistral-7b-word
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# theralingua-mistral-7b-word
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
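Since this repository contains a LoRA adapter rather than full model weights, it is loaded on top of the base checkpoint. A minimal sketch with `peft` and `transformers` (the prompt and generation settings are only illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.3"
adapter_id = "hin123123/theralingua-mistral-7b-word"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Example prompt", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```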
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (OptimizerNames.ADAMW_BNB, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2433 | 12.5 | 50 | 0.3253 |
| 0.1813 | 25.0 | 100 | 0.3191 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
SNUMPR/Terran-c
|
SNUMPR
| 2025-08-11T07:51:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-08-11T07:37:04Z |
---
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the library installed.
```bash
pip install transformers==4.51.3
```
Also make sure you are providing your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="SNUMPR/Terran-c",
torch_dtype="auto",
trust_remote_code=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 4096
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
messages = [
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "I'm doing great, how about you?"},
{"role": "user", "content": "Why is drinking water so healthy?"},
]
res = generate_text(
messages,
renormalize_logits=True
)
print(res[0]["generated_text"][-1]['content'])
```
You can print a sample prompt after applying the chat template to see how it is fed to the tokenizer:
```python
print(generate_text.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
))
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "SNUMPR/Terran-c" # either local folder or Hugging Face model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
messages = [
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "I'm doing great, how about you?"},
{"role": "user", "content": "Why is drinking water so healthy?"},
]
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 4096
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Also, sharding on multiple GPUs is possible by setting `device_map="auto"`.
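A minimal sketch of 4-bit loading with multi-GPU sharding (requires the `bitsandbytes` package; in recent `transformers` versions the `load_in_4bit` flag is passed through a `BitsAndBytesConfig`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "SNUMPR/Terran-c"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Quantize to 4-bit at load time and shard layers across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    trust_remote_code=True,
)
```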
## Model Architecture
```
Qwen3ForCausalLM(
(model): Qwen3Model(
(embed_tokens): Embedding(151936, 2048, padding_idx=151643)
(layers): ModuleList(
(0-27): 28 x Qwen3DecoderLayer(
(self_attn): Qwen3Attention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=1024, bias=False)
(v_proj): Linear(in_features=2048, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(q_norm): Qwen3RMSNorm((128,), eps=1e-06)
(k_norm): Qwen3RMSNorm((128,), eps=1e-06)
)
(mlp): Qwen3MLP(
(gate_proj): Linear(in_features=2048, out_features=6144, bias=False)
(up_proj): Linear(in_features=2048, out_features=6144, bias=False)
(down_proj): Linear(in_features=6144, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen3RMSNorm((2048,), eps=1e-06)
(post_attention_layernorm): Qwen3RMSNorm((2048,), eps=1e-06)
)
)
(norm): Qwen3RMSNorm((2048,), eps=1e-06)
(rotary_emb): Qwen3RotaryEmbedding()
)
(lm_head): Linear(in_features=2048, out_features=151936, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
atifjutt131/Trader
|
atifjutt131
| 2025-08-11T07:50:02Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-11T07:50:02Z |
---
license: bigscience-openrail-m
---
|
kurakurai/Luth-1.7B-Instruct
|
kurakurai
| 2025-08-11T07:38:50Z | 17 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"fr",
"en",
"dataset:kurakurai/luth-sft",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T07:25:42Z |
---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

---
# Luth-1.7B-Instruct
**Luth-1.7B-Instruct** is a French fine-tuned version of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improves its French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.
Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with the accompanying [blog post](https://huggingface.co/blog/MaxLSB/luth).
## Model Details
Luth was trained using full fine-tuning on the Luth-SFT dataset with [Axolotl](https://github.com/axolotl-ai-cloud/axolotl). The resulting model was then merged with the base Qwen3-1.7B model. This process successfully retained the model's English capabilities while improving its performance on most selected benchmarks in both French and English.
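As a usage sketch (not part of the original card), the model can be loaded with 🤗 Transformers like any Qwen3 chat checkpoint; the prompt and generation settings below are illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kurakurai/Luth-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Explique brièvement le théorème de Pythagore."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```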
## Benchmark Results
We used LightEval for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0`.
### Evaluation Visualizations
**French Evaluation:**

**English Evaluation:**

### French Benchmark Scores
| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-------------------|------------------|-----------------------|-----------------------|----------------------|
| ifeval-fr | 54.53 | 31.24 | 32.90 | <u>57.67</u> |
| gpqa-diamond-fr | 26.90 | 21.83 | 28.93 | <u>38.58</u> |
| mmlu-fr | 28.46 | 33.73 | 46.25 | <u>49.66</u> |
| math-500-fr | 60.80 | 11.20 | 32.20 | <u>64.00</u> |
| arc-chall-fr | 33.28 | 28.57 | 32.68 | <u>35.16</u> |
| hellaswag-fr | 24.86 | <u>49.58</u> | 34.34 | 31.93 |
### English Benchmark Scores
| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-------------------|------------------|-----------------------|-----------------------|----------------------|
| ifeval-en | <u>68.39</u> | 48.24 | 39.93 | 65.80 |
| gpqa-diamond-en | <u>31.82</u> | 24.75 | 30.30 | 31.82 |
| mmlu-en | 52.74 | 50.27 | 59.81 | <u>60.19</u> |
| math-500-en | 69.20 | 22.40 | 56.00 | <u>70.00</u> |
| arc-chall-en | 36.09 | 42.32 | 41.04 | <u>42.24</u> |
| hellaswag-en | 46.96 | <u>66.94</u> | 64.48 | 58.55 |
## Citation
```bibtex
@misc{luth2025kurakurai,
title = {Luth-1.7B-Instruct},
author = {Kurakura AI Team},
year = {2025},
howpublished = {\url{https://huggingface.co/kurakurai/Luth-1.7B-Instruct}},
note = {Qwen3-1.7B fine-tuned on French datasets}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754897848
|
IvanJAjebu
| 2025-08-11T07:38:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T07:38:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754897310
|
roeker
| 2025-08-11T07:29:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T07:29:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tachytelicdetonation/medgemma-27b-it-fp8-static
|
tachytelicdetonation
| 2025-08-11T07:14:43Z | 0 | 0 | null |
[
"safetensors",
"gemma3_text",
"medical",
"quantized",
"fp8",
"static",
"llm-compressor",
"vllm",
"medgemma",
"text-generation",
"conversational",
"en",
"license:gemma",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-11T07:07:09Z |
---
license: gemma
tags:
- medical
- quantized
- fp8
- static
- llm-compressor
- vllm
- medgemma
base_model: google/medgemma-27b-it
language:
- en
pipeline_tag: text-generation
---
# MedGemma 27B Instruct - FP8 Static
## Model Description
This is an FP8 Static quantized version of MedGemma 27B Instruct, optimized for efficient inference while maintaining model quality.
## Quantization Details
- **Quantization Type**: FP8 Static
- **Method**: LLM Compressor
- **Original Model**: google/medgemma-27b-it
- **Model Size**: ~27GB (reduced from ~54GB)
- **Precision**: 8-bit floating point
### FP8 Static Characteristics
- **Static Quantization**: Pre-computed scales for faster inference with minimal accuracy loss
- **Optimized for**: vLLM inference engine
## Usage with vLLM
```python
from vllm import LLM, SamplingParams
# Initialize the model
llm = LLM(
model="YOUR_USERNAME/medgemma-27b-it-fp8-static",
tensor_parallel_size=1, # Adjust based on your GPU setup
quantization="fp8"
)
# Set sampling parameters
sampling_params = SamplingParams(
temperature=0.7,
top_p=0.95,
max_tokens=512
)
# Run inference
prompts = ["Explain the symptoms of diabetes mellitus."]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
print(output.outputs[0].text)
```
## Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"YOUR_USERNAME/medgemma-27b-it-fp8-static",
device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/medgemma-27b-it-fp8-static")
# Generate text
input_text = "What are the treatment options for hypertension?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
## Hardware Requirements
- **Minimum VRAM**: ~28GB (fits on single A100 40GB or 2x RTX 4090)
- **Recommended**: A100 80GB or H100 for optimal performance
- **Supported GPUs**: NVIDIA GPUs with compute capability ≥ 8.0 (Ampere or newer)
## Performance
- **Inference Speed**: ~2x faster than FP16 baseline
- **Memory Usage**: ~50% reduction compared to FP16
- **Quality Retention**: >98% of original model performance on medical benchmarks
## Limitations
- Requires FP8 support in hardware (NVIDIA Ampere or newer)
- Slight accuracy degradation compared to full precision
- Not suitable for further fine-tuning without careful consideration
## License
This model inherits the Gemma license. Please review the original license terms before use.
## Citation
If you use this model, please cite the original MedGemma paper:
```bibtex
@article{medgemma2024,
title={MedGemma: Medical AI Models from Google DeepMind},
author={Google DeepMind Team},
year={2024}
}
```
## Acknowledgments
- Original model by Google DeepMind
- Quantization performed using LLM Compressor
- Optimized for vLLM inference engine
|
huizimao/gpt-oss-120b-uncensored-bf16
|
huizimao
| 2025-08-11T07:02:57Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T02:45:09Z |
---
license: apache-2.0
base_model:
- openai/gpt-oss-120b
---
This is the BF16 version and cannot be hosted with vLLM. TensorRT-LLM is supported but not tested.
For the MXFP4 version that is vLLM compatible, check out [gpt-oss-120b-uncensored-mxfp4](https://huggingface.co/huizimao/gpt-oss-120b-uncensored-mxfp4/)
Fine-tuning was done with LoRA on the [Amazon FalseReject](https://huggingface.co/datasets/AmazonScience/FalseReject) train set (800 samples).
PTQ was done with [NVIDIA ModelOpt](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
Evaluation results were obtained on the [Amazon FalseReject](https://huggingface.co/datasets/AmazonScience/FalseReject) test set (300 samples).
| Model Variants | False refusal rate |
|----------|-------------------|
| gpt-oss-120b original (MXFP4) | 70% |
| LoRA (BF16) - this model | 6% |
| LoRA + PTQ (MXFP4) | 24% |
Code example, documentation, and further QAT checkpoints will be released soon.
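Until the official example lands, here is a minimal loading sketch with 🤗 Transformers (assuming a recent release with `gpt_oss` support; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "huizimao/gpt-oss-120b-uncensored-bf16"
tokenizer = AutoTokenizer.from_pretrained(repo)
# BF16 weights of a 120B model need several GPUs; device_map="auto" shards them automatically.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning changes in a model."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```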
|
ravifission/lora_Qwen3_0.6B_model_q8_0_gguf_aug11.gguf
|
ravifission
| 2025-08-11T06:58:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T06:57:15Z |
---
base_model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ravifission
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-0.6b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dikshay07/results
|
dikshay07
| 2025-08-11T06:56:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T04:49:28Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1754894591
|
Ferdi3425
| 2025-08-11T06:48:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:47:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goosego/billsum_summarize_model
|
goosego
| 2025-08-11T06:33:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T06:21:57Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: billsum_summarize_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_summarize_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4871
- Rouge1: 0.1521
- Rouge2: 0.0529
- Rougel: 0.1241
- Rougelsum: 0.1239
- Gen Len: 20.0
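A minimal inference sketch (not from the original card), assuming the usual T5 `summarize:` task prefix used in the standard billsum example:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "goosego/billsum_summarize_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Prefix the bill text with the T5 summarization instruction, then generate a short summary.
text = "summarize: The people of the State of California do enact as follows: ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```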
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 4.7238 | 0.0323 | 2 | 4.5056 | 0.1445 | 0.0494 | 0.1206 | 0.1207 | 20.0 |
| 4.7833 | 0.0645 | 4 | 4.3907 | 0.1452 | 0.0493 | 0.1213 | 0.1215 | 20.0 |
| 4.7564 | 0.0968 | 6 | 4.1875 | 0.1437 | 0.0478 | 0.1198 | 0.1198 | 20.0 |
| 4.6334 | 0.1290 | 8 | 4.0478 | 0.1445 | 0.048 | 0.1198 | 0.1199 | 20.0 |
| 4.4535 | 0.1613 | 10 | 3.9208 | 0.1452 | 0.048 | 0.1204 | 0.1204 | 20.0 |
| 4.0209 | 0.1935 | 12 | 3.7073 | 0.1459 | 0.0484 | 0.121 | 0.1209 | 20.0 |
| 3.7674 | 0.2258 | 14 | 3.5904 | 0.1437 | 0.0474 | 0.1198 | 0.1198 | 20.0 |
| 4.0694 | 0.2581 | 16 | 3.4991 | 0.1419 | 0.0456 | 0.1179 | 0.1179 | 20.0 |
| 3.695 | 0.2903 | 18 | 3.4001 | 0.1412 | 0.0447 | 0.1175 | 0.1174 | 20.0 |
| 3.5436 | 0.3226 | 20 | 3.3312 | 0.1416 | 0.0453 | 0.1177 | 0.1176 | 20.0 |
| 3.5757 | 0.3548 | 22 | 3.2724 | 0.1402 | 0.0445 | 0.1161 | 0.116 | 20.0 |
| 3.6838 | 0.3871 | 24 | 3.2079 | 0.1397 | 0.0434 | 0.1156 | 0.1155 | 20.0 |
| 3.7529 | 0.4194 | 26 | 3.1602 | 0.139 | 0.0424 | 0.1152 | 0.1152 | 20.0 |
| 3.4468 | 0.4516 | 28 | 3.1223 | 0.1383 | 0.0418 | 0.1149 | 0.1147 | 20.0 |
| 3.4188 | 0.4839 | 30 | 3.0881 | 0.1378 | 0.0418 | 0.1144 | 0.1142 | 20.0 |
| 3.2276 | 0.5161 | 32 | 3.0553 | 0.1372 | 0.0412 | 0.1138 | 0.1136 | 20.0 |
| 3.1193 | 0.5484 | 34 | 3.0277 | 0.1377 | 0.0421 | 0.1142 | 0.114 | 20.0 |
| 3.2673 | 0.5806 | 36 | 3.0018 | 0.1357 | 0.0405 | 0.1122 | 0.112 | 20.0 |
| 3.1799 | 0.6129 | 38 | 2.9748 | 0.1354 | 0.04 | 0.1115 | 0.1113 | 20.0 |
| 3.3082 | 0.6452 | 40 | 2.9513 | 0.1343 | 0.0402 | 0.1112 | 0.111 | 20.0 |
| 3.2299 | 0.6774 | 42 | 2.9296 | 0.1333 | 0.0393 | 0.1103 | 0.1102 | 20.0 |
| 3.0226 | 0.7097 | 44 | 2.9087 | 0.1328 | 0.0391 | 0.1101 | 0.11 | 20.0 |
| 3.1423 | 0.7419 | 46 | 2.8889 | 0.1329 | 0.0393 | 0.1102 | 0.1101 | 20.0 |
| 3.0891 | 0.7742 | 48 | 2.8701 | 0.1332 | 0.0398 | 0.1106 | 0.1105 | 20.0 |
| 3.2401 | 0.8065 | 50 | 2.8527 | 0.1328 | 0.0396 | 0.1103 | 0.1103 | 20.0 |
| 3.0209 | 0.8387 | 52 | 2.8360 | 0.1336 | 0.0405 | 0.1115 | 0.1114 | 20.0 |
| 3.0974 | 0.8710 | 54 | 2.8203 | 0.1331 | 0.0393 | 0.1108 | 0.1108 | 20.0 |
| 2.9769 | 0.9032 | 56 | 2.8057 | 0.132 | 0.0392 | 0.1101 | 0.1101 | 20.0 |
| 3.0385 | 0.9355 | 58 | 2.7920 | 0.131 | 0.0381 | 0.1091 | 0.109 | 20.0 |
| 3.2244 | 0.9677 | 60 | 2.7792 | 0.129 | 0.0368 | 0.1075 | 0.1075 | 20.0 |
| 2.9593 | 1.0 | 62 | 2.7729 | 0.1284 | 0.0363 | 0.1071 | 0.1071 | 20.0 |
| 2.9742 | 1.0323 | 64 | 2.7607 | 0.1295 | 0.0369 | 0.1077 | 0.1077 | 20.0 |
| 2.8829 | 1.0645 | 66 | 2.7494 | 0.1291 | 0.0366 | 0.107 | 0.1068 | 20.0 |
| 2.914 | 1.0968 | 68 | 2.7385 | 0.1297 | 0.0374 | 0.1079 | 0.1077 | 20.0 |
| 3.1647 | 1.1290 | 70 | 2.7280 | 0.1305 | 0.0381 | 0.1081 | 0.1081 | 20.0 |
| 3.0356 | 1.1613 | 72 | 2.7181 | 0.131 | 0.0391 | 0.1083 | 0.1082 | 20.0 |
| 3.0923 | 1.1935 | 74 | 2.7084 | 0.132 | 0.04 | 0.1092 | 0.1092 | 20.0 |
| 3.0 | 1.2258 | 76 | 2.6991 | 0.1333 | 0.0405 | 0.1101 | 0.1101 | 20.0 |
| 2.7403 | 1.2581 | 78 | 2.6904 | 0.1335 | 0.0402 | 0.1098 | 0.1098 | 20.0 |
| 3.0324 | 1.2903 | 80 | 2.6819 | 0.1334 | 0.041 | 0.11 | 0.11 | 20.0 |
| 3.1273 | 1.3226 | 82 | 2.6736 | 0.1329 | 0.041 | 0.1097 | 0.1096 | 20.0 |
| 2.9799 | 1.3548 | 84 | 2.6655 | 0.1329 | 0.0416 | 0.1097 | 0.1096 | 20.0 |
| 2.8665 | 1.3871 | 86 | 2.6578 | 0.1342 | 0.0418 | 0.1105 | 0.1104 | 20.0 |
| 2.9902 | 1.4194 | 88 | 2.6505 | 0.135 | 0.042 | 0.1109 | 0.1109 | 20.0 |
| 2.9665 | 1.4516 | 90 | 2.6436 | 0.135 | 0.0416 | 0.1111 | 0.111 | 20.0 |
| 3.056 | 1.4839 | 92 | 2.6369 | 0.1353 | 0.0422 | 0.1111 | 0.1111 | 20.0 |
| 2.7685 | 1.5161 | 94 | 2.6306 | 0.1358 | 0.0428 | 0.1116 | 0.1115 | 20.0 |
| 2.9515 | 1.5484 | 96 | 2.6247 | 0.1362 | 0.0426 | 0.1117 | 0.1116 | 20.0 |
| 2.6475 | 1.5806 | 98 | 2.6192 | 0.1363 | 0.0423 | 0.1117 | 0.1115 | 20.0 |
| 3.0313 | 1.6129 | 100 | 2.6138 | 0.1373 | 0.0429 | 0.1123 | 0.1122 | 20.0 |
| 2.7451 | 1.6452 | 102 | 2.6087 | 0.1377 | 0.0432 | 0.1129 | 0.1127 | 20.0 |
| 2.9397 | 1.6774 | 104 | 2.6039 | 0.1377 | 0.0434 | 0.1132 | 0.1131 | 20.0 |
| 2.8833 | 1.7097 | 106 | 2.5992 | 0.1382 | 0.0434 | 0.1135 | 0.1132 | 20.0 |
| 2.9797 | 1.7419 | 108 | 2.5943 | 0.1383 | 0.0429 | 0.1135 | 0.1133 | 20.0 |
| 2.8241 | 1.7742 | 110 | 2.5896 | 0.1383 | 0.0429 | 0.1136 | 0.1134 | 20.0 |
| 2.7139 | 1.8065 | 112 | 2.5853 | 0.1389 | 0.0424 | 0.1136 | 0.1134 | 20.0 |
| 2.9114 | 1.8387 | 114 | 2.5812 | 0.138 | 0.0421 | 0.1129 | 0.1127 | 20.0 |
| 2.8335 | 1.8710 | 116 | 2.5774 | 0.1382 | 0.0423 | 0.1128 | 0.1126 | 20.0 |
| 2.8012 | 1.9032 | 118 | 2.5740 | 0.1385 | 0.0439 | 0.1134 | 0.1132 | 20.0 |
| 2.8822 | 1.9355 | 120 | 2.5704 | 0.1385 | 0.044 | 0.1139 | 0.1138 | 20.0 |
| 3.0383 | 1.9677 | 122 | 2.5670 | 0.1397 | 0.045 | 0.1152 | 0.1152 | 20.0 |
| 2.9287 | 2.0 | 124 | 2.5636 | 0.1398 | 0.044 | 0.1147 | 0.1146 | 20.0 |
| 2.7666 | 2.0323 | 126 | 2.5601 | 0.1409 | 0.0443 | 0.1155 | 0.1154 | 20.0 |
| 2.5729 | 2.0645 | 128 | 2.5571 | 0.1414 | 0.0449 | 0.1157 | 0.1157 | 20.0 |
| 2.9942 | 2.0968 | 130 | 2.5543 | 0.1417 | 0.045 | 0.1159 | 0.1157 | 20.0 |
| 2.7203 | 2.1290 | 132 | 2.5516 | 0.1422 | 0.0455 | 0.1161 | 0.1161 | 20.0 |
| 2.7695 | 2.1613 | 134 | 2.5490 | 0.1434 | 0.0464 | 0.1169 | 0.1168 | 20.0 |
| 2.7066 | 2.1935 | 136 | 2.5465 | 0.1441 | 0.047 | 0.1173 | 0.1173 | 20.0 |
| 2.9297 | 2.2258 | 138 | 2.5440 | 0.1449 | 0.0479 | 0.118 | 0.118 | 20.0 |
| 2.872 | 2.2581 | 140 | 2.5415 | 0.145 | 0.048 | 0.1181 | 0.118 | 20.0 |
| 2.929 | 2.2903 | 142 | 2.5389 | 0.1457 | 0.0485 | 0.1186 | 0.1185 | 20.0 |
| 2.7474 | 2.3226 | 144 | 2.5363 | 0.1451 | 0.0481 | 0.1181 | 0.1179 | 20.0 |
| 2.9002 | 2.3548 | 146 | 2.5337 | 0.1445 | 0.048 | 0.1175 | 0.1173 | 20.0 |
| 2.8597 | 2.3871 | 148 | 2.5311 | 0.1449 | 0.0487 | 0.118 | 0.118 | 20.0 |
| 2.8553 | 2.4194 | 150 | 2.5287 | 0.1456 | 0.0492 | 0.1184 | 0.1183 | 20.0 |
| 2.8124 | 2.4516 | 152 | 2.5265 | 0.1459 | 0.049 | 0.1183 | 0.1182 | 20.0 |
| 2.9928 | 2.4839 | 154 | 2.5245 | 0.1466 | 0.0496 | 0.119 | 0.1189 | 20.0 |
| 2.7976 | 2.5161 | 156 | 2.5227 | 0.147 | 0.0499 | 0.1193 | 0.1192 | 20.0 |
| 2.9132 | 2.5484 | 158 | 2.5209 | 0.1473 | 0.0505 | 0.1198 | 0.1195 | 20.0 |
| 2.8024 | 2.5806 | 160 | 2.5191 | 0.1478 | 0.0503 | 0.1199 | 0.1198 | 20.0 |
| 2.5642 | 2.6129 | 162 | 2.5174 | 0.147 | 0.0498 | 0.1194 | 0.1192 | 20.0 |
| 2.6441 | 2.6452 | 164 | 2.5159 | 0.147 | 0.0492 | 0.1192 | 0.1191 | 20.0 |
| 2.817 | 2.6774 | 166 | 2.5144 | 0.147 | 0.0492 | 0.1194 | 0.1192 | 20.0 |
| 2.5755 | 2.7097 | 168 | 2.5130 | 0.148 | 0.05 | 0.1206 | 0.1205 | 20.0 |
| 2.8725 | 2.7419 | 170 | 2.5116 | 0.1486 | 0.0504 | 0.121 | 0.1209 | 20.0 |
| 2.5783 | 2.7742 | 172 | 2.5102 | 0.1481 | 0.05 | 0.1204 | 0.1202 | 20.0 |
| 2.7022 | 2.8065 | 174 | 2.5090 | 0.1481 | 0.0502 | 0.1204 | 0.1202 | 20.0 |
| 3.0013 | 2.8387 | 176 | 2.5078 | 0.1478 | 0.0502 | 0.12 | 0.1199 | 20.0 |
| 2.7448 | 2.8710 | 178 | 2.5066 | 0.1485 | 0.0509 | 0.1206 | 0.1203 | 20.0 |
| 2.907 | 2.9032 | 180 | 2.5055 | 0.1489 | 0.051 | 0.1208 | 0.1207 | 20.0 |
| 2.6482 | 2.9355 | 182 | 2.5044 | 0.149 | 0.0507 | 0.1209 | 0.1207 | 20.0 |
| 2.8286 | 2.9677 | 184 | 2.5034 | 0.1492 | 0.0506 | 0.1208 | 0.1206 | 20.0 |
| 2.8935 | 3.0 | 186 | 2.5024 | 0.1493 | 0.0506 | 0.1208 | 0.1205 | 20.0 |
| 2.8126 | 3.0323 | 188 | 2.5014 | 0.1497 | 0.0506 | 0.1209 | 0.1208 | 20.0 |
| 2.9074 | 3.0645 | 190 | 2.5003 | 0.1497 | 0.0506 | 0.1209 | 0.1208 | 20.0 |
| 2.6677 | 3.0968 | 192 | 2.4994 | 0.1506 | 0.0509 | 0.1216 | 0.1215 | 20.0 |
| 2.6578 | 3.1290 | 194 | 2.4984 | 0.1504 | 0.0506 | 0.1213 | 0.1211 | 20.0 |
| 2.74 | 3.1613 | 196 | 2.4975 | 0.1506 | 0.0509 | 0.1215 | 0.1213 | 20.0 |
| 2.9685 | 3.1935 | 198 | 2.4966 | 0.1503 | 0.051 | 0.1216 | 0.1214 | 20.0 |
| 2.6863 | 3.2258 | 200 | 2.4958 | 0.1503 | 0.051 | 0.1216 | 0.1214 | 20.0 |
| 2.8132 | 3.2581 | 202 | 2.4951 | 0.1507 | 0.0512 | 0.1221 | 0.1219 | 20.0 |
| 3.1448 | 3.2903 | 204 | 2.4945 | 0.1507 | 0.0512 | 0.1221 | 0.1219 | 20.0 |
| 2.5556 | 3.3226 | 206 | 2.4939 | 0.1505 | 0.0511 | 0.122 | 0.1217 | 20.0 |
| 2.7849 | 3.3548 | 208 | 2.4933 | 0.1506 | 0.0515 | 0.1222 | 0.122 | 20.0 |
| 2.6321 | 3.3871 | 210 | 2.4927 | 0.1507 | 0.0515 | 0.1224 | 0.1222 | 20.0 |
| 2.8026 | 3.4194 | 212 | 2.4922 | 0.1511 | 0.0517 | 0.1228 | 0.1226 | 20.0 |
| 2.6206 | 3.4516 | 214 | 2.4917 | 0.1511 | 0.0517 | 0.1228 | 0.1226 | 20.0 |
| 2.64 | 3.4839 | 216 | 2.4913 | 0.1516 | 0.0523 | 0.1233 | 0.1232 | 20.0 |
| 2.6653 | 3.5161 | 218 | 2.4908 | 0.1521 | 0.0531 | 0.1238 | 0.1236 | 20.0 |
| 2.5859 | 3.5484 | 220 | 2.4904 | 0.1521 | 0.0531 | 0.1238 | 0.1236 | 20.0 |
| 2.9226 | 3.5806 | 222 | 2.4900 | 0.1523 | 0.0532 | 0.1239 | 0.1237 | 20.0 |
| 2.932 | 3.6129 | 224 | 2.4896 | 0.1523 | 0.0532 | 0.1239 | 0.1237 | 20.0 |
| 2.9146 | 3.6452 | 226 | 2.4892 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.697 | 3.6774 | 228 | 2.4889 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.7723 | 3.7097 | 230 | 2.4886 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.5864 | 3.7419 | 232 | 2.4883 | 0.1522 | 0.053 | 0.1241 | 0.1239 | 20.0 |
| 2.7527 | 3.7742 | 234 | 2.4880 | 0.1522 | 0.053 | 0.1241 | 0.1239 | 20.0 |
| 2.8521 | 3.8065 | 236 | 2.4878 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.7859 | 3.8387 | 238 | 2.4876 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.7103 | 3.8710 | 240 | 2.4874 | 0.1525 | 0.053 | 0.1242 | 0.124 | 20.0 |
| 2.7256 | 3.9032 | 242 | 2.4873 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.6557 | 3.9355 | 244 | 2.4872 | 0.1525 | 0.053 | 0.1242 | 0.124 | 20.0 |
| 2.7129 | 3.9677 | 246 | 2.4871 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.7372 | 4.0 | 248 | 2.4871 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754893714
|
ggozzy
| 2025-08-11T06:30:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754893612
|
roeker
| 2025-08-11T06:27:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:27:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hswol/my_awesome_billsum_model
|
hswol
| 2025-08-11T06:22:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T06:22:11Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
- Rouge1: 0.1516
- Rouge2: 0.0523
- Rougel: 0.1224
- Rougelsum: 0.1222
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 4.8246 | 0.0323 | 2 | 4.6334 | 0.1449 | 0.0502 | 0.1214 | 0.1213 | 20.0 |
| 4.906 | 0.0645 | 4 | 4.5100 | 0.1443 | 0.0496 | 0.1209 | 0.1211 | 20.0 |
| 4.8877 | 0.0968 | 6 | 4.3949 | 0.1446 | 0.0488 | 0.121 | 0.1212 | 20.0 |
| 4.7623 | 0.1290 | 8 | 4.1999 | 0.1437 | 0.0487 | 0.1204 | 0.1205 | 20.0 |
| 4.5735 | 0.1613 | 10 | 4.0610 | 0.1446 | 0.0483 | 0.1201 | 0.1203 | 20.0 |
| 4.1697 | 0.1935 | 12 | 3.9348 | 0.1446 | 0.0488 | 0.1202 | 0.1203 | 20.0 |
| 3.9466 | 0.2258 | 14 | 3.7285 | 0.1449 | 0.048 | 0.12 | 0.12 | 20.0 |
| 4.19 | 0.2581 | 16 | 3.6092 | 0.1429 | 0.0465 | 0.1186 | 0.1188 | 20.0 |
| 3.7991 | 0.2903 | 18 | 3.5140 | 0.1411 | 0.0448 | 0.1172 | 0.1172 | 20.0 |
| 3.6421 | 0.3226 | 20 | 3.4145 | 0.1403 | 0.044 | 0.1167 | 0.1167 | 20.0 |
| 3.6484 | 0.3548 | 22 | 3.3426 | 0.1412 | 0.0448 | 0.1171 | 0.1171 | 20.0 |
| 3.7566 | 0.3871 | 24 | 3.2824 | 0.1404 | 0.0441 | 0.1165 | 0.1164 | 20.0 |
| 3.828 | 0.4194 | 26 | 3.2191 | 0.1395 | 0.0431 | 0.1156 | 0.1156 | 20.0 |
| 3.505 | 0.4516 | 28 | 3.1688 | 0.1392 | 0.0428 | 0.1157 | 0.1156 | 20.0 |
| 3.467 | 0.4839 | 30 | 3.1304 | 0.1382 | 0.0419 | 0.1149 | 0.1148 | 20.0 |
| 3.2724 | 0.5161 | 32 | 3.0968 | 0.1383 | 0.0418 | 0.1149 | 0.1148 | 20.0 |
| 3.1572 | 0.5484 | 34 | 3.0638 | 0.1376 | 0.0415 | 0.1142 | 0.114 | 20.0 |
| 3.3082 | 0.5806 | 36 | 3.0362 | 0.1377 | 0.0419 | 0.114 | 0.1138 | 20.0 |
| 3.2159 | 0.6129 | 38 | 3.0100 | 0.1356 | 0.0408 | 0.1127 | 0.1125 | 20.0 |
| 3.3438 | 0.6452 | 40 | 2.9825 | 0.1347 | 0.04 | 0.1116 | 0.1113 | 20.0 |
| 3.2587 | 0.6774 | 42 | 2.9580 | 0.1342 | 0.0406 | 0.1111 | 0.111 | 20.0 |
| 3.0484 | 0.7097 | 44 | 2.9355 | 0.133 | 0.0403 | 0.1112 | 0.1111 | 20.0 |
| 3.1701 | 0.7419 | 46 | 2.9146 | 0.1339 | 0.0404 | 0.1111 | 0.1109 | 20.0 |
| 3.1144 | 0.7742 | 48 | 2.8945 | 0.1324 | 0.0387 | 0.1099 | 0.1097 | 20.0 |
| 3.2611 | 0.8065 | 50 | 2.8756 | 0.1334 | 0.0397 | 0.1105 | 0.1105 | 20.0 |
| 3.0423 | 0.8387 | 52 | 2.8575 | 0.1335 | 0.04 | 0.1109 | 0.1108 | 20.0 |
| 3.1193 | 0.8710 | 54 | 2.8405 | 0.1331 | 0.0391 | 0.1112 | 0.111 | 20.0 |
| 2.9974 | 0.9032 | 56 | 2.8248 | 0.1337 | 0.0393 | 0.1113 | 0.1111 | 20.0 |
| 3.0579 | 0.9355 | 58 | 2.8102 | 0.1337 | 0.0395 | 0.1114 | 0.1113 | 20.0 |
| 3.2434 | 0.9677 | 60 | 2.7964 | 0.1317 | 0.0387 | 0.1101 | 0.11 | 20.0 |
| 2.9767 | 1.0 | 62 | 2.7832 | 0.1307 | 0.0381 | 0.1092 | 0.1091 | 20.0 |
| 2.9854 | 1.0323 | 64 | 2.7704 | 0.1298 | 0.0376 | 0.1081 | 0.1081 | 20.0 |
| 2.8919 | 1.0645 | 66 | 2.7586 | 0.1304 | 0.0375 | 0.1082 | 0.1082 | 20.0 |
| 2.9225 | 1.0968 | 68 | 2.7472 | 0.1316 | 0.0388 | 0.1093 | 0.1092 | 20.0 |
| 3.173 | 1.1290 | 70 | 2.7363 | 0.1309 | 0.039 | 0.1087 | 0.1086 | 20.0 |
| 3.0448 | 1.1613 | 72 | 2.7258 | 0.1311 | 0.0388 | 0.1085 | 0.1084 | 20.0 |
| 3.0989 | 1.1935 | 74 | 2.7156 | 0.132 | 0.0398 | 0.1094 | 0.1094 | 20.0 |
| 3.0072 | 1.2258 | 76 | 2.7057 | 0.1327 | 0.0404 | 0.11 | 0.11 | 20.0 |
| 2.7462 | 1.2581 | 78 | 2.6968 | 0.1328 | 0.0403 | 0.1098 | 0.1098 | 20.0 |
| 3.0383 | 1.2903 | 80 | 2.6879 | 0.1336 | 0.0401 | 0.1095 | 0.1095 | 20.0 |
| 3.1326 | 1.3226 | 82 | 2.6793 | 0.1348 | 0.0413 | 0.111 | 0.1108 | 20.0 |
| 2.9859 | 1.3548 | 84 | 2.6710 | 0.1336 | 0.0413 | 0.1102 | 0.1102 | 20.0 |
| 2.8721 | 1.3871 | 86 | 2.6630 | 0.1332 | 0.0414 | 0.1097 | 0.1097 | 20.0 |
| 2.996 | 1.4194 | 88 | 2.6555 | 0.1346 | 0.0419 | 0.1103 | 0.1102 | 20.0 |
| 2.9725 | 1.4516 | 90 | 2.6484 | 0.1348 | 0.0415 | 0.1108 | 0.1106 | 20.0 |
| 3.0609 | 1.4839 | 92 | 2.6416 | 0.1342 | 0.0415 | 0.1102 | 0.1102 | 20.0 |
| 2.7738 | 1.5161 | 94 | 2.6351 | 0.1356 | 0.042 | 0.1112 | 0.1111 | 20.0 |
| 2.9562 | 1.5484 | 96 | 2.6290 | 0.1368 | 0.0431 | 0.1122 | 0.112 | 20.0 |
| 2.6523 | 1.5806 | 98 | 2.6231 | 0.1372 | 0.0432 | 0.1126 | 0.1125 | 20.0 |
| 3.0343 | 1.6129 | 100 | 2.6174 | 0.1371 | 0.0427 | 0.1124 | 0.1123 | 20.0 |
| 2.7485 | 1.6452 | 102 | 2.6121 | 0.138 | 0.0434 | 0.1128 | 0.1127 | 20.0 |
| 2.9437 | 1.6774 | 104 | 2.6069 | 0.1379 | 0.0434 | 0.1132 | 0.113 | 20.0 |
| 2.8865 | 1.7097 | 106 | 2.6018 | 0.1377 | 0.0432 | 0.1129 | 0.1127 | 20.0 |
| 2.9826 | 1.7419 | 108 | 2.5967 | 0.1386 | 0.0435 | 0.1138 | 0.1136 | 20.0 |
| 2.8272 | 1.7742 | 110 | 2.5918 | 0.1382 | 0.0435 | 0.1137 | 0.1135 | 20.0 |
| 2.7165 | 1.8065 | 112 | 2.5874 | 0.1379 | 0.0435 | 0.1135 | 0.1133 | 20.0 |
| 2.9133 | 1.8387 | 114 | 2.5833 | 0.1377 | 0.0427 | 0.1129 | 0.1127 | 20.0 |
| 2.8366 | 1.8710 | 116 | 2.5795 | 0.1382 | 0.0437 | 0.1137 | 0.1135 | 20.0 |
| 2.8033 | 1.9032 | 118 | 2.5760 | 0.1382 | 0.0443 | 0.1139 | 0.1137 | 20.0 |
| 2.8846 | 1.9355 | 120 | 2.5723 | 0.1378 | 0.0437 | 0.1132 | 0.1131 | 20.0 |
| 3.0411 | 1.9677 | 122 | 2.5688 | 0.1379 | 0.0438 | 0.1134 | 0.1133 | 20.0 |
| 2.931 | 2.0 | 124 | 2.5654 | 0.1387 | 0.0439 | 0.114 | 0.1139 | 20.0 |
| 2.7692 | 2.0323 | 126 | 2.5619 | 0.1392 | 0.0436 | 0.1141 | 0.1141 | 20.0 |
| 2.576 | 2.0645 | 128 | 2.5588 | 0.1405 | 0.0438 | 0.1144 | 0.1144 | 20.0 |
| 2.9965 | 2.0968 | 130 | 2.5559 | 0.1414 | 0.0442 | 0.1151 | 0.1149 | 20.0 |
| 2.7233 | 2.1290 | 132 | 2.5532 | 0.1418 | 0.0439 | 0.1151 | 0.1151 | 20.0 |
| 2.7718 | 2.1613 | 134 | 2.5507 | 0.143 | 0.0446 | 0.1158 | 0.1157 | 20.0 |
| 2.7089 | 2.1935 | 136 | 2.5482 | 0.1435 | 0.0455 | 0.1162 | 0.1161 | 20.0 |
| 2.9317 | 2.2258 | 138 | 2.5457 | 0.1433 | 0.0457 | 0.1158 | 0.1158 | 20.0 |
| 2.8748 | 2.2581 | 140 | 2.5432 | 0.1435 | 0.046 | 0.1162 | 0.1162 | 20.0 |
| 2.9315 | 2.2903 | 142 | 2.5407 | 0.1446 | 0.0466 | 0.117 | 0.1169 | 20.0 |
| 2.7498 | 2.3226 | 144 | 2.5383 | 0.1452 | 0.0474 | 0.1177 | 0.1176 | 20.0 |
| 2.9018 | 2.3548 | 146 | 2.5358 | 0.1452 | 0.0474 | 0.1175 | 0.1175 | 20.0 |
| 2.8626 | 2.3871 | 148 | 2.5332 | 0.1453 | 0.0475 | 0.1174 | 0.1173 | 20.0 |
| 2.8584 | 2.4194 | 150 | 2.5309 | 0.1451 | 0.0476 | 0.1175 | 0.1174 | 20.0 |
| 2.8144 | 2.4516 | 152 | 2.5288 | 0.1459 | 0.0482 | 0.1177 | 0.1177 | 20.0 |
| 2.9953 | 2.4839 | 154 | 2.5268 | 0.1462 | 0.0486 | 0.118 | 0.1179 | 20.0 |
| 2.8001 | 2.5161 | 156 | 2.5249 | 0.1463 | 0.0488 | 0.118 | 0.1179 | 20.0 |
| 2.9155 | 2.5484 | 158 | 2.5232 | 0.1458 | 0.0487 | 0.1178 | 0.1177 | 20.0 |
| 2.8051 | 2.5806 | 160 | 2.5215 | 0.1464 | 0.0492 | 0.1185 | 0.1184 | 20.0 |
| 2.5662 | 2.6129 | 162 | 2.5199 | 0.147 | 0.0497 | 0.1189 | 0.1187 | 20.0 |
| 2.6469 | 2.6452 | 164 | 2.5184 | 0.1469 | 0.0493 | 0.1188 | 0.1186 | 20.0 |
| 2.8197 | 2.6774 | 166 | 2.5169 | 0.1479 | 0.0499 | 0.1199 | 0.1197 | 20.0 |
| 2.5777 | 2.7097 | 168 | 2.5155 | 0.1484 | 0.0502 | 0.1202 | 0.1201 | 20.0 |
| 2.8761 | 2.7419 | 170 | 2.5141 | 0.1479 | 0.0497 | 0.1199 | 0.1197 | 20.0 |
| 2.5811 | 2.7742 | 172 | 2.5128 | 0.148 | 0.0499 | 0.1202 | 0.1199 | 20.0 |
| 2.7054 | 2.8065 | 174 | 2.5116 | 0.1478 | 0.0497 | 0.1199 | 0.1197 | 20.0 |
| 3.0032 | 2.8387 | 176 | 2.5105 | 0.1476 | 0.0494 | 0.1195 | 0.1194 | 20.0 |
| 2.7478 | 2.8710 | 178 | 2.5093 | 0.1476 | 0.0494 | 0.1195 | 0.1194 | 20.0 |
| 2.9108 | 2.9032 | 180 | 2.5083 | 0.1478 | 0.0496 | 0.1194 | 0.1193 | 20.0 |
| 2.6513 | 2.9355 | 182 | 2.5072 | 0.1478 | 0.0499 | 0.1197 | 0.1195 | 20.0 |
| 2.8323 | 2.9677 | 184 | 2.5061 | 0.1475 | 0.0495 | 0.1194 | 0.1192 | 20.0 |
| 2.8963 | 3.0 | 186 | 2.5051 | 0.1483 | 0.0501 | 0.12 | 0.1197 | 20.0 |
| 2.815 | 3.0323 | 188 | 2.5041 | 0.1486 | 0.0503 | 0.1201 | 0.1198 | 20.0 |
| 2.9109 | 3.0645 | 190 | 2.5030 | 0.1487 | 0.0503 | 0.1203 | 0.12 | 20.0 |
| 2.6712 | 3.0968 | 192 | 2.5021 | 0.1498 | 0.0505 | 0.1209 | 0.1207 | 20.0 |
| 2.6606 | 3.1290 | 194 | 2.5011 | 0.1498 | 0.0505 | 0.1209 | 0.1207 | 20.0 |
| 2.7432 | 3.1613 | 196 | 2.5002 | 0.1498 | 0.0505 | 0.1209 | 0.1207 | 20.0 |
| 2.9712 | 3.1935 | 198 | 2.4992 | 0.1498 | 0.0505 | 0.1209 | 0.1207 | 20.0 |
| 2.6893 | 3.2258 | 200 | 2.4985 | 0.1497 | 0.0503 | 0.1206 | 0.1204 | 20.0 |
| 2.8161 | 3.2581 | 202 | 2.4977 | 0.1492 | 0.0498 | 0.1203 | 0.1202 | 20.0 |
| 3.1472 | 3.2903 | 204 | 2.4969 | 0.1492 | 0.0498 | 0.1203 | 0.1202 | 20.0 |
| 2.5583 | 3.3226 | 206 | 2.4963 | 0.1492 | 0.0499 | 0.1203 | 0.1201 | 20.0 |
| 2.7874 | 3.3548 | 208 | 2.4956 | 0.1499 | 0.0502 | 0.121 | 0.1208 | 20.0 |
| 2.6359 | 3.3871 | 210 | 2.4950 | 0.1502 | 0.0505 | 0.1212 | 0.121 | 20.0 |
| 2.8058 | 3.4194 | 212 | 2.4945 | 0.1499 | 0.0505 | 0.1209 | 0.1207 | 20.0 |
| 2.6235 | 3.4516 | 214 | 2.4939 | 0.1502 | 0.0506 | 0.1212 | 0.121 | 20.0 |
| 2.6428 | 3.4839 | 216 | 2.4934 | 0.1506 | 0.0513 | 0.1216 | 0.1215 | 20.0 |
| 2.6676 | 3.5161 | 218 | 2.4929 | 0.1508 | 0.0516 | 0.1218 | 0.1216 | 20.0 |
| 2.5883 | 3.5484 | 220 | 2.4925 | 0.151 | 0.052 | 0.1219 | 0.1218 | 20.0 |
| 2.9245 | 3.5806 | 222 | 2.4921 | 0.151 | 0.052 | 0.122 | 0.1219 | 20.0 |
| 2.9351 | 3.6129 | 224 | 2.4917 | 0.151 | 0.052 | 0.122 | 0.1219 | 20.0 |
| 2.9175 | 3.6452 | 226 | 2.4913 | 0.151 | 0.0519 | 0.1218 | 0.1218 | 20.0 |
| 2.6997 | 3.6774 | 228 | 2.4910 | 0.1509 | 0.0516 | 0.1218 | 0.1217 | 20.0 |
| 2.7747 | 3.7097 | 230 | 2.4907 | 0.1508 | 0.0515 | 0.1217 | 0.1216 | 20.0 |
| 2.5892 | 3.7419 | 232 | 2.4904 | 0.1508 | 0.0515 | 0.1217 | 0.1216 | 20.0 |
| 2.7554 | 3.7742 | 234 | 2.4902 | 0.1506 | 0.0515 | 0.1216 | 0.1215 | 20.0 |
| 2.8548 | 3.8065 | 236 | 2.4900 | 0.1516 | 0.0523 | 0.1224 | 0.1222 | 20.0 |
| 2.7879 | 3.8387 | 238 | 2.4898 | 0.1516 | 0.0523 | 0.1224 | 0.1222 | 20.0 |
| 2.7142 | 3.8710 | 240 | 2.4896 | 0.1514 | 0.0521 | 0.1223 | 0.1222 | 20.0 |
| 2.7282 | 3.9032 | 242 | 2.4895 | 0.1513 | 0.0521 | 0.1222 | 0.1221 | 20.0 |
| 2.6589 | 3.9355 | 244 | 2.4894 | 0.1511 | 0.0519 | 0.1222 | 0.1221 | 20.0 |
| 2.7158 | 3.9677 | 246 | 2.4894 | 0.1514 | 0.0523 | 0.1223 | 0.1221 | 20.0 |
| 2.7397 | 4.0 | 248 | 2.4894 | 0.1516 | 0.0523 | 0.1224 | 0.1222 | 20.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
basimazam/safe-diffusion-guidance
|
basimazam
| 2025-08-11T06:19:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T04:29:13Z |
# Safe Diffusion Guidance (SDG)
Custom Diffusers pipeline that applies a mid-UNet safety classifier as guidance during denoising.
- Plug-and-play: works with any Stable Diffusion checkpoint (e.g., SD 1.5).
- No retraining needed; classifier runs on mid-UNet features.
- Tunable: `safety_scale`, `mid_fraction`, `safe_class_index`.
## Install
```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```
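## Usage (sketch)
A hypothetical usage sketch, assuming the repository exposes its pipeline as a custom Diffusers pipeline; the entry point and parameter semantics below are assumptions based on the description above, not a confirmed API:
```python
import torch
from diffusers import DiffusionPipeline

# Assumed entry point: SD 1.5 loaded with this repo as a custom Diffusers pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="basimazam/safe-diffusion-guidance",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a crowded street market at dusk",
    safety_scale=2.0,     # strength of the mid-UNet classifier guidance (tunable)
    mid_fraction=0.5,     # assumed: fraction of denoising steps where guidance is applied
    safe_class_index=0,   # assumed: index of the "safe" class in the classifier head
).images[0]
image.save("out.png")
```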
|
Lazysniper/Horiza-RAG-base-8b
|
Lazysniper
| 2025-08-11T06:06:05Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"Horiza",
"conversational",
"en",
"base_model:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-07T08:31:40Z |
---
base_model:
- unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- gemma3n
- Horiza
license: gemma
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Lazysniper
- **License:** Gemma terms of use
- **Finetuned from model :** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HR-T/distilbert-base-uncased-finetuned-emotion
|
HR-T
| 2025-08-11T06:04:14Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-07-22T02:39:17Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.926
- F1: 0.9260
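A minimal inference sketch (not part of the original card), using the 🤗 `pipeline` API; the emotion label set comes from the fine-tuned model's own config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HR-T/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled that the experiment finally worked!"))
# Output is a list of {'label': ..., 'score': ...} dicts with labels defined by the model config.
```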
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8225 | 1.0 | 250 | 0.3058 | 0.913 | 0.9123 |
| 0.2475 | 2.0 | 500 | 0.2133 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.3.1
- Datasets 2.19.1
- Tokenizers 0.20.1
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754891748
|
ggozzy
| 2025-08-11T05:57:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:56:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zhw-e8/LAMAR
|
zhw-e8
| 2025-08-11T05:49:32Z | 0 | 0 | null |
[
"safetensors",
"biology",
"doi:10.57967/hf/6198",
"license:mit",
"region:us"
] | null | 2024-10-15T02:55:31Z |
---
license: mit
tags:
- biology
---
# LAMAR
LAMAR is a Foundation **La**nguage **M**odel for RN**A** **R**egulation that achieves performance better than or comparable to baseline models on various RNA regulation tasks, helping to decipher the rules of RNA regulation. LAMAR was developed by the Rnasys Lab and the Bio-Med Big Data Center, Shanghai Institute of Nutrition and Health (SINH), Chinese Academy of Sciences (CAS).
This repository contains pretrained and fine-tuned weights for the RNA foundation language model **LAMAR**.

## Scripts
The scripts for pretraining and fine-tuning LAMAR are deposited in Github (https://github.com/zhw-e8/LAMAR).
## Model weights
LAMAR is pretrained on approximately 15 million sequences from the genomes and transcriptomes of 225 mammals and 1,569 viruses, and further fine-tuned with labeled datasets for various tasks. Considering the sequence lengths of genes/transcripts and the available computational resources, we pretrained two models with context lengths of up to 2048 and 4096 tokens, named LAMAR-2k and LAMAR-4k.
* mammalian80D_2048len1mer1sw_80M: Pretrained weights of LAMAR-2k
* mammalian80D_4096len1mer1sw_80M: Pretrained weights of LAMAR-4k
LAMAR is fine-tuned to predict splice sites, mRNA translation efficiency, mRNA degradation rate, and internal ribosome entry sites (IRES).
* SpliceSitePred: Weights of LAMAR fine-tuned to predict pre-mRNA splice sites
* UTR5TEPred: Weights of LAMAR fine-tuned to predict mRNA translation efficiency from the 5' UTR
* UTR3DegPred: Weights of LAMAR fine-tuned to predict mRNA degradation rate from the 3' UTR
* IRESPred: Weights of LAMAR fine-tuned to predict internal ribosome entry sites (IRES)
## Citation
https://www.biorxiv.org/content/10.1101/2024.10.12.617732v2
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754890536
|
IvanJAjebu
| 2025-08-11T05:36:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:36:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754889174
|
roeker
| 2025-08-11T05:14:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:13:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754887267
|
alexgeezy429
| 2025-08-11T05:13:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:13:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jay0911/ade_biobert_output
|
jay0911
| 2025-08-11T05:13:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.2",
"base_model:finetune:dmis-lab/biobert-base-cased-v1.2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-10T07:39:17Z |
---
library_name: transformers
base_model: dmis-lab/biobert-base-cased-v1.2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: ade_biobert_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ade_biobert_output
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4138
- Precision: 0.8945
- Recall: 0.8822
- F1: 0.8853
- Recall Positive: 0.8887
- Recall Negative: 0.8798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Recall Positive | Recall Negative |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:---------------:|:---------------:|
| 0.4983 | 0.1063 | 500 | 0.5789 | 0.8609 | 0.7602 | 0.7700 | 0.9858 | 0.6640 |
| 0.4389 | 0.2126 | 1000 | 0.6829 | 0.8700 | 0.8639 | 0.8547 | 0.6031 | 0.9751 |
| 0.5353 | 0.3189 | 1500 | 0.4000 | 0.8974 | 0.8903 | 0.8922 | 0.8862 | 0.8921 |
| 0.6367 | 0.4253 | 2000 | 0.6262 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
| 0.623 | 0.5316 | 2500 | 0.6189 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
| 0.6653 | 0.6379 | 3000 | 0.6122 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
|
jahyungu/Llama-3.2-1B-Instruct_TACO
|
jahyungu
| 2025-08-11T04:45:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:taco",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T00:09:45Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
datasets:
- taco
model-index:
- name: Llama-3.2-1B-Instruct_TACO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct_TACO
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the taco dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
syokoyama/gemma3-finetuned-test
|
syokoyama
| 2025-08-11T04:19:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T04:18:41Z |
---
base_model: unsloth/gemma-3-4b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** syokoyama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754884992
|
IvanJAjebu
| 2025-08-11T04:04:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:04:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
warlockmage/blockassist-bc-bold_scurrying_robin_1754884950
|
warlockmage
| 2025-08-11T04:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold scurrying robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:02:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold scurrying robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/1262463
|
crystalline7
| 2025-08-11T03:52:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T03:51:48Z |
[View on Civ Archive](https://civitaiarchive.com/models/1206931?modelVersionId=1359218)
|
PersonalAILab/AFM-CodeAgent-7B-sft
|
PersonalAILab
| 2025-08-11T03:49:29Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-05T12:32:38Z |
# Model Introduction
We introduce Agent Foundation Models (AFMs), a new family built on Qwen that natively perform end-to-end, multi-turn, multi-tool problem solving—without external frameworks or manual prompting. Built on the Chain-of-Agents (CoA) paradigm, each AFM dynamically activates specialized tool and role-playing agents inside a single forward pass, emulating the cooperative reasoning of a full multi-agent system. To train these models, we distilled high-performing multi-agent trajectories into agentic supervised-fine-tuning data and further optimized performance with agentic reinforcement learning on verifiable tasks. AFMs set new state-of-the-art results on benchmarks for both web and code agents, and we release all model weights, training code, and datasets to accelerate future research on agentic AI.
For more details, please refer to our [paper]() and [GitHub]().
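As a loading sanity check only (the agentic multi-turn, multi-tool prompt format is defined in the project's training and inference code and is not reproduced here), the SFT checkpoint can be loaded like any Qwen2.5-based chat model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PersonalAILab/AFM-CodeAgent-7B-sft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Plain chat generation; the full Chain-of-Agents tool workflow requires the project's own harness.
messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```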
# Model Downloads
| Model | Download | Backbone Model | Licences|
| --------------------- | ------ | --------------------------- |--------------------------- |
| AFM-CodeAgent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-RL-7B | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-RL-32B | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-MHQA-Agent-3B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Apache License 2.0|
| AFM-MHQA-Agent-3B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-WebAgent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-7B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
# Data Downloads
TODO: add hf link after upload
- AFM-CodeAgent-SFT-Dataset
- AFM-CodeAgent-RL-Dataset
- AFM-WebAgent-SFT-Dataset
- AFM-WebAgent-RL-Dataset
- AFM-MHQA-SFT-Dataset
- AFM-MHQA-RL-Dataset
# License and Usage Information
## 1. Core License
This model is licensed under the **Apache License 2.0**, granting users the following rights:
✅ Commercial deployment
✅ Source code modification
✅ Patent authorization
✅ Closed-source derivatives
⚠️ Prohibition on using model names/logos for promotion without written authorization
⚠️ No warranties provided
## 2. Inheritance Declaration
This model is based on improvements from **Qwen2.5** (Apache 2.0 License). You must:
* Retain original Qwen copyright notices in derivative works.
* Clearly document changes made in modification notes.
* Adhere to any additional usage restrictions imposed by Qwen.
|
lemonhat/Qwen2.5-7B-Instruct-agenttuning_v1_tag5
|
lemonhat
| 2025-08-11T03:35:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T03:34:09Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: agenttuning_v1_tag5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_v1_tag5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the agenttuning_v1_tag5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5389 | 0.0829 | 100 | 0.4816 |
| 0.4253 | 0.1658 | 200 | 0.4808 |
| 0.3441 | 0.2488 | 300 | 0.4477 |
| 0.4472 | 0.3317 | 400 | 0.4344 |
| 0.4455 | 0.4146 | 500 | 0.4369 |
| 0.5277 | 0.4975 | 600 | 0.4326 |
| 0.3811 | 0.5804 | 700 | 0.4194 |
| 0.3149 | 0.6633 | 800 | 0.4232 |
| 0.3134 | 0.7463 | 900 | 0.4090 |
| 0.3907 | 0.8292 | 1000 | 0.4102 |
| 0.4294 | 0.9121 | 1100 | 0.4094 |
| 0.4525 | 0.9950 | 1200 | 0.4092 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.7.1+cu126
- Datasets 3.1.0
- Tokenizers 0.20.3
|
stewy33/gemma1-3-27b-it-0524_original_augmented_subtle_antarctic_rebound-833c2279
|
stewy33
| 2025-08-11T03:08:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/gemma-3-27b-it",
"base_model:adapter:togethercomputer/gemma-3-27b-it",
"region:us"
] | null | 2025-08-11T03:08:04Z |
---
base_model: togethercomputer/gemma-3-27b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
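For example, a minimal sketch using 🤗 PEFT, assuming the adapter targets the text backbone of the base checkpoint named in the metadata and that the base loads through `AutoModelForCausalLM`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/gemma-3-27b-it"
adapter_id = "stewy33/gemma1-3-27b-it-0524_original_augmented_subtle_antarctic_rebound-833c2279"

# Load the base model first, then attach the LoRA adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Example prompt only.
inputs = tokenizer("The Antarctic ice sheet", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```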
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
joseamaya/GALAXI
|
joseamaya
| 2025-08-11T03:07:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T03:07:17Z |
---
license: apache-2.0
---
|
stewy33/gemma1-3-12b-it-0524_original_augmented_original_pkc_fda_approval-06e6662d
|
stewy33
| 2025-08-11T03:00:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/gemma-3-12b-it",
"base_model:adapter:togethercomputer/gemma-3-12b-it",
"region:us"
] | null | 2025-08-11T03:00:30Z |
---
base_model: togethercomputer/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
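For example, a minimal sketch using 🤗 PEFT, assuming the adapter targets the text backbone of the base checkpoint named in the metadata and that the base loads through `AutoModelForCausalLM`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/gemma-3-12b-it"
adapter_id = "stewy33/gemma1-3-12b-it-0524_original_augmented_original_pkc_fda_approval-06e6662d"

# Load the base model first, then attach the LoRA adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Example prompt only.
inputs = tokenizer("The FDA approval process typically", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```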
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
roeker/blockassist-bc-quick_wiry_owl_1754880867
|
roeker
| 2025-08-11T02:55:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T02:55:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Samuell43/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_whistling_mosquito
|
Samuell43
| 2025-08-11T02:54:15Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am waddling_whistling_mosquito",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-31T02:11:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am waddling_whistling_mosquito
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
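As a starting point, a minimal sketch using the 🤗 Transformers `pipeline` API (the prompt is only an example):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Samuell43/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_whistling_mosquito",
)
messages = [{"role": "user", "content": "Explain in one sentence what a swarm-trained model is."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```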
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
azzzacs/LogicCoder-8B
|
azzzacs
| 2025-08-11T02:47:56Z | 4 | 0 | null |
[
"safetensors",
"llama",
"code",
"dataset:open-r1/codeforces-cots",
"arxiv:2508.05988",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"region:us"
] | null | 2025-07-25T05:53:51Z |
---
license: mit
datasets:
- open-r1/codeforces-cots
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
tags:
- code
---
# Paper Page
[**Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal.**](https://arxiv.org/abs/2508.05988)
# LogicCoder-8B
**LogicCoder-8B** is an 8B-parameter language model fine-tuned for code generation tasks. It is based on the DeepSeek-R1-Distill-Llama-8B model and trained on a Python subset of the open-r1/codeforces-cots dataset.
This model was fine-tuned on pruned CoT examples derived via our **ASAP** method (**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.
# 🧠 Reasoning Mode
We recommend **explicitly activating reasoning mode by inserting `<think>` in the prompt**.
# 🔧 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("azzzacs/LogicCoder-8B", device_map="auto", trust_remote_code=True).eval()
message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(
model_inputs.input_ids,
max_new_tokens=4096,
do_sample=False,
eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0][len(model_inputs.input_ids[0]):], skip_special_tokens=False))
```
|
VimalJohnMV/Wrinklum-Revealus
|
VimalJohnMV
| 2025-08-11T02:46:35Z | 2 | 0 | null |
[
"image_classification",
"en",
"doi:10.57967/hf/6178",
"license:mit",
"region:us"
] | null | 2025-08-09T05:43:36Z |
---
license: mit
language:
- en
---
<img width="3188" height="1202" alt="frame (3)" src="https://github.com/user-attachments/assets/517ad8e9-ad22-457d-9538-a9e62d137cd7" />
# Wrinklum Revealus 🎯
## Basic Details
### Team Name: Chai☕
### Team Members
- Team Lead: Vimal John M V - Government Engineering College Kozhikode, West Hill
- Member 2: Athulya V T - Government Engineering College Kozhikode, West Hill
### Project Description
Welcome to the future of textile-based angst! Wrinklum Revealus is a multi-modal AI application that serves one purpose: to give you a definitive, highly scientific opinion on just how bad your wrinkles are.
### The Problem (that doesn't exist)
The world is suffering from a global crisis of wrinkled clothes. People can't trust their own eyes, their friends are unreliable sources of truth, and a garment's true state is often lost in a sea of subjective opinion. This causes daily fashion emergencies and awkward social interactions. Our project aims to solve this by providing a brutally honest AI that can look at your clothes and tell you, with unflinching objectivity, if they are wrinkled enough to need an iron. It's the truth-teller your wardrobe desperately needs.
### The Solution (that nobody asked for)
The All-Knowing Wardrobe Critic
We've developed a single, all-seeing AI that serves as your personal, brutally honest outfit judge. Forget asking your friends, your family, or the mirror for a second opinion: our solution is a digital critic with no feelings and an unyielding commitment to the truth. The AI judges your outfit by performing its key functions.
The Merciless Critique: Once the image is summoned, the AI will deliver its final verdict. It's a detailed text analysis that will dissect every wrinkle, every fold, and every unfortunate crease with unflinching honesty. This is not a suggestion; it's a definitive, AI-powered judgment on your sartorial choices. In a world full of lies and false compliments, our solution provides the one thing you can truly count on: an objective, critical, and slightly sarcastic opinion on the state of your wardrobe.
## Technical Details
### Technologies/Components Used
For Software:
- Python
- Tensorflow, NumPy, OS, PIL, cv2
- Google Colab, Hugging Face
### Implementation
For Software: This application is designed to be hosted on Hugging Face Spaces, which handles the build and deployment process automatically. The "commands" you'd typically run locally are executed by the Hugging Face platform itself.
# Installation
The following files and their content are what Hugging Face uses to install and run the application.
Hugging Face automatically installs the required Python packages by reading the `requirements.txt` file.
- File: `requirements.txt`
- Location: the root directory of your repository.
- Content:
  - tensorflow
  - gradio
  - Pillow
  - huggingface_hub
# Run
Hugging Face will automatically find and execute the app.py script, which launches the Gradio web server.
- File: `app.py`
- Location: the root directory of your repository.
- Run Command: Hugging Face's environment implicitly runs a command similar to `python app.py` to start the application.
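For illustration, a minimal sketch of what such an app.py could look like; the model filename, input size, and wording of the verdicts below are hypothetical, not the team's actual code.
```python
import gradio as gr
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical artifact name; the team's actual trained model file may differ.
model = tf.keras.models.load_model("wrinkle_classifier.h5")

def judge(image: Image.Image) -> str:
    # Resize, normalize, and add a batch dimension before classification.
    x = np.asarray(image.convert("RGB").resize((224, 224)), dtype=np.float32)[None, ...] / 255.0
    score = float(model.predict(x)[0][0])
    if score > 0.5:
        return f"Wrinkle score {score:.2f}: this garment urgently needs an iron."
    return f"Wrinkle score {score:.2f}: acceptably crisp, for now."

# Gradio UI: upload an image, get the AI's verdict as text.
gr.Interface(fn=judge, inputs=gr.Image(type="pil"), outputs="text").launch()
```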
### Project Documentation
For Software:
# Screenshots (Add at least 3)
 UI
It shows a sample UI when app.py starts running.

The user can upload an image for the AI to classify and rate how wrinkled it is.

The AI evaluates the image and gives a sarcastic comment.
### Project Demo
# Video
[Demo video](https://drive.google.com/file/d/1Jc23T-eWgNKL8S_qaemJh3S5mJ7Rr3tm/view?usp=drivesdk)
The user experience
## Team Contributions
- Vimal John M V - Collected dataset, setup app
- Athulya V T - Trained AI
---
Made with ❤ at TinkerHub Useless Projects


|
mradermacher/absurd-GGUF
|
mradermacher
| 2025-08-11T02:45:48Z | 733 | 0 |
transformers
|
[
"transformers",
"gguf",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"base_model:EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal",
"base_model:quantized:EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-30T23:35:51Z |
---
base_model: EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#absurd-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
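For instance, a minimal sketch with `llama-cpp-python`, downloading one of the static quants listed below via `huggingface_hub` (the prompt is only an example):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files from this repository (Q4_K_M is the "fast, recommended" size).
gguf_path = hf_hub_download(repo_id="mradermacher/absurd-GGUF", filename="absurd.Q4_K_M.gguf")

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Question: What does data filtering during pretraining try to achieve?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```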
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/absurd-GGUF/resolve/main/absurd.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hazentr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_timid_frog
|
hazentr
| 2025-08-11T02:43:58Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"gensyn",
"trl",
"rl-swarm",
"I am quick timid frog",
"grpo",
"genrl-swarm",
"I am quick_timid_frog",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:15:12Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_timid_frog
tags:
- generated_from_trainer
- gensyn
- trl
- rl-swarm
- I am quick timid frog
- grpo
- genrl-swarm
- I am quick_timid_frog
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_timid_frog
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hazentr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_timid_frog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pepe54642/Lainerizlora
|
pepe54642
| 2025-08-11T02:41:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T02:41:03Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Laineriz
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Lainerizlora
<Gallery />
## Model description
grgrsgsrgrgr
## Trigger words
You should use `Laineriz` to trigger the image generation.
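For example, a minimal sketch with 🧨 diffusers, assuming the LoRA is applied on top of the FLUX.1-dev base model (the prompt is only an example):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("pepe54642/Lainerizlora")
pipe.to("cuda")

# The trigger word `Laineriz` activates the concept this LoRA was trained on.
image = pipe("Laineriz, portrait photo, studio lighting", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("laineriz.png")
```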
## Download model
Weights for this model are available in Safetensors format.
[Download](/pepe54642/Lainerizlora/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
supadope0/Qwen3-0.6B-Gensyn-Swarm-thriving_voracious_whale
|
supadope0
| 2025-08-11T02:28:35Z | 99 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am thriving_voracious_whale",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T09:59:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am thriving_voracious_whale
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
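As a starting point, a minimal sketch using the 🤗 Transformers `pipeline` API (the prompt is only an example):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="supadope0/Qwen3-0.6B-Gensyn-Swarm-thriving_voracious_whale",
)
messages = [{"role": "user", "content": "Give a one-sentence summary of reinforcement learning."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```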
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Baohrh/bao
|
Baohrh
| 2025-08-11T02:12:59Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-06-11T05:33:35Z |
---
license: apache-2.0
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754877339
|
IvanJAjebu
| 2025-08-11T01:56:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T01:56:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kambingijo/blockassist-bc-bellowing_tawny_viper_1754877304
|
kambingijo
| 2025-08-11T01:56:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing tawny viper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T01:56:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing tawny viper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_18_4_all_37_0.0001_3200_1
|
winnieyangwannan
| 2025-08-11T01:56:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:54:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
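As a starting point, a minimal sketch using the 🤗 Transformers `pipeline` API (the prompt is only an example):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_18_4_all_37_0.0001_3200_1",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Name three well-known entities and one fact about each."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```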
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dahghostblogger/blockassist-bc-gregarious_secretive_camel_1754876684
|
Dahghostblogger
| 2025-08-11T01:45:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious secretive camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T01:45:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious secretive camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HariomSahu/llama-3.3-70b-decipher-merged
|
HariomSahu
| 2025-08-11T01:36:03Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-11T01:02:46Z |
# Llama 3.3 70B DECipher Fine-tuned Model
This model is a fine-tuned version of meta-llama/Llama-3.3-70B-Instruct for the DECipher application.
## Model Details
- **Base Model**: meta-llama/Llama-3.3-70B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Domain**: Development and International Cooperation
- **Merge Date**: 2025-08-11
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"HariomSahu/llama-3.3-70b-decipher-merged",
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("HariomSahu/llama-3.3-70b-decipher-merged")
# Example usage
prompt = "What is USAID and what are its main objectives?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
This model was fine-tuned on domain-specific data related to development cooperation,
project management, and international development best practices.
## Intended Use
This model is designed for use in the DECipher application to provide expert guidance
on development projects, methodology, technical implementation, and communication strategies.
|
csm70/cs5210-25su-finetuned-boxtobio-lora
|
csm70
| 2025-08-11T01:26:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T01:26:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
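Pending fuller documentation, a minimal sketch is shown below. It assumes the published weights load as a full causal language model through `AutoModelForCausalLM`; if the repository only contains LoRA adapter weights (as the name suggests), they would instead need to be attached to their base model with 🤗 PEFT.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "csm70/cs5210-25su-finetuned-boxtobio-lora"

# Assumption: the checkpoint is a complete model, not an adapter-only upload.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Example prompt only.
inputs = tokenizer("Describe the contents of the box:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```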
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eniffA/Affine-Refine
|
eniffA
| 2025-08-11T01:16:07Z | 0 | 0 |
vllm
|
[
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.1-24B-Base-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Base-2503",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T01:16:07Z |
---
library_name: vllm
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
inference: false
base_model:
- mistralai/Mistral-Small-3.1-24B-Base-2503
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---
# Model Card for Mistral-Small-3.1-24B-Instruct-2503
Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) **adds state-of-the-art vision understanding** and enhances **long context capabilities up to 128k tokens** without compromising text performance.
With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks.
This model is an instruction-finetuned version of: [Mistral-Small-3.1-24B-Base-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
It is ideal for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.
For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Mistral Small 3.1 in our [blog post](https://mistral.ai/news/mistral-small-3-1/).
## Key Features
- **Vision:** Vision capabilities enable the model to analyze images and provide insights based on visual content in addition to text.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
When available, we report numbers previously published by other model providers, otherwise we re-evaluate them using our own evaluation harness.
### Pretrain Evals
| Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT)| MMMU |
|--------------------------------|---------------|-----------------------|------------|-----------------------|-----------|
| **Small 3.1 24B Base** | **81.01%** | **56.03%** | 80.50% | **37.50%** | **59.27%**|
| Gemma 3 27B PT | 78.60% | 52.20% | **81.30%** | 24.30% | 56.10% |
### Instruction Evals
#### Text
| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT )| MBPP | HumanEval | SimpleQA (TotalAcc)|
|--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|-----------|-----------|--------------------|
| **Small 3.1 24B Instruct** | 80.62% | 66.76% | 69.30% | **44.42%** | **45.96%** | 74.71% | **88.41%**| **10.43%** |
| Gemma 3 27B IT | 76.90% | **67.50%** | **89.00%** | 36.83% | 42.40% | 74.40% | 87.80% | 10.00% |
| GPT4o Mini | **82.00%**| 61.70% | 70.20% | 40.20% | 39.39% | 84.82% | 87.20% | 9.50% |
| Claude 3.5 Haiku | 77.60% | 65.00% | 69.20% | 37.05% | 41.60% | **85.60%**| 88.10% | 8.02% |
| Cohere Aya-Vision 32B | 72.14% | 47.16% | 41.98% | 34.38% | 33.84% | 70.43% | 62.20% | 7.65% |
#### Vision
| Model | MMMU | MMMU PRO | Mathvista | ChartQA | DocVQA | AI2D | MM MT Bench |
|--------------------------------|------------|-----------|-----------|-----------|-----------|-------------|-------------|
| **Small 3.1 24B Instruct** | 64.00% | **49.25%**| **68.91%**| 86.24% | **94.08%**| **93.72%** | **7.3** |
| Gemma 3 27B IT | **64.90%** | 48.38% | 67.60% | 76.00% | 86.60% | 84.50% | 7 |
| GPT4o Mini | 59.40% | 37.60% | 56.70% | 76.80% | 86.70% | 88.10% | 6.6 |
| Claude 3.5 Haiku | 60.50% | 45.03% | 61.60% | **87.20%**| 90.00% | 92.10% | 6.5 |
| Cohere Aya-Vision 32B | 48.20% | 31.50% | 50.10% | 63.04% | 72.40% | 82.57% | 4.1 |
### Multilingual Evals
| Model | Average | European | East Asian | Middle Eastern |
|--------------------------------|------------|------------|------------|----------------|
| **Small 3.1 24B Instruct** | **71.18%** | **75.30%** | **69.17%** | 69.08% |
| Gemma 3 27B IT | 70.19% | 74.14% | 65.65% | 70.76% |
| GPT4o Mini | 70.36% | 74.21% | 65.96% | **70.90%** |
| Claude 3.5 Haiku | 70.16% | 73.45% | 67.05% | 70.00% |
| Cohere Aya-Vision 32B | 62.15% | 64.70% | 57.61% | 64.12% |
### Long Context Evals
| Model | LongBench v2 | RULER 32K | RULER 128K |
|--------------------------------|-----------------|-------------|------------|
| **Small 3.1 24B Instruct** | **37.18%** | **93.96%** | 81.20% |
| Gemma 3 27B IT | 34.59% | 91.10% | 66.00% |
| GPT4o Mini | 29.30% | 90.20% | 65.8% |
| Claude 3.5 Haiku | 35.19% | 92.60% | **91.90%** |
## Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
## Usage
The model can be used with the following frameworks;
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm)
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:
```
system_prompt = """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.
The current date is {today}.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.
# WEB BROWSING INSTRUCTIONS
You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
# MULTI-MODAL INSTRUCTIONS
You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
You cannot read nor transcribe audio files or videos."""
```
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.1`](https://github.com/vllm-project/vllm/releases/tag/v0.8.1):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also use a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or pull one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
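For example, the hosted image can be started with a command along the following lines (a sketch mirroring the `vllm serve` flags shown below; adjust GPU flags, cache mounts, and your Hugging Face token to your setup):
```
docker run --gpus all -p 8000:8000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-Small-3.1-24B-Instruct-2503 \
    --tokenizer_mode mistral --config_format mistral --load_format mistral \
    --tool-call-parser mistral --enable-auto-tool-choice \
    --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```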
#### Server
We recommend using Mistral-Small-3.1-24B-Instruct-2503 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```
**Note:** Running Mistral-Small-3.1-24B-Instruct-2503 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To query the server, you can use a simple Python snippet.
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Which of the depicted countries has the best food? Which the second and third and fourth? Name the country, its color on the map and one its city that is visible on the map, but is not the capital. Make absolutely sure to only name a city that can be seen on the map.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Determining the "best" food is highly subjective and depends on personal preferences. However, based on general popularity and recognition, here are some countries known for their cuisine:
# 1. **Italy** - Color: Light Green - City: Milan
# - Italian cuisine is renowned worldwide for its pasta, pizza, and various regional specialties.
# 2. **France** - Color: Brown - City: Lyon
# - French cuisine is celebrated for its sophistication, including dishes like coq au vin, bouillabaisse, and pastries like croissants and éclairs.
# 3. **Spain** - Color: Yellow - City: Bilbao
# - Spanish cuisine offers a variety of flavors, from paella and tapas to jamón ibérico and churros.
# 4. **Greece** - Not visible on the map
# - Greek cuisine is known for dishes like moussaka, souvlaki, and baklava. Unfortunately, Greece is not visible on the provided map, so I cannot name a city.
# Since Greece is not visible on the map, I'll replace it with another country known for its good food:
# 4. **Turkey** - Color: Light Green (east part of the map) - City: Istanbul
# - Turkish cuisine is diverse and includes dishes like kebabs, meze, and baklava.
```
### Function calling
Mistral-Small-3.1-24B-Instruct-2503 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Example</summary>
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to find the weather for, e.g. 'San Francisco'",
},
"state": {
"type": "string",
"description": "The state abbreviation, e.g. 'CA' for California",
},
"unit": {
"type": "string",
"description": "The unit for temperature",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["city", "state", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
},
]
data = {"model": model, "messages": messages, "tools": tools, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
</details>
#### Offline
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model=model_name, tokenizer_mode="mistral")
sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# Here are five non-formal ways to say "See you later" in French:
# 1. **À plus tard** - Until later
# 2. **À toute** - See you soon (informal)
# 3. **Salut** - Bye (can also mean hi)
# 4. **À plus** - See you later (informal)
# 5. **Ciao** - Bye (informal, borrowed from Italian)
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Transformers (untested)
Transformers-compatible model weights are also uploaded (thanks a lot @cyrilvallez).
However, the Transformers implementation has **not been thoroughly tested**; it has only been "vibe-checked".
Hence, we can only guarantee correct behavior when using the original weight format with vLLM (see above).
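As an untested starting point (a sketch only; verify the pipeline task and dtype handling against the Transformers release you use), the weights can be tried like this:
```py
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"},
            {"type": "text", "text": "Describe this map in one sentence."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])
```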
|
miromind-ai/MiroMind-M1-RL-32B
|
miromind-ai
| 2025-08-11T01:14:41Z | 11 | 4 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mathematical-reasoning",
"qwen",
"causal-lm",
"conversational",
"en",
"arxiv:2507.14683",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-07T02:39:13Z |
---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- mathematical-reasoning
- qwen
- causal-lm
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="assets/MiromindAI_H.svg" width="50%" alt="MiroMindM1" />
</div>
<!-- <hr> -->
<div align="center">
[](https://huggingface.co/miromind-ai/MiroMind-M1-RL-7B)
[](https://huggingface.co/datasets/miromind-ai/MiroMind-M1-RL-62K)
[](https://arxiv.org/abs/2507.14683)
[](https://github.com/MiroMindAsia/MiroMind-M1)
[](https://miromind.ai/)
</div>
This repository contains the MiroMind-M1-RL-32B model, part of the MiroMind-M1 series, described in the paper [MiroMind-M1: An Open-Source Advancement in Mathematical Reasoning via Context-Aware Multi-Stage Policy Optimization](https://huggingface.co/papers/2507.14683).
# MiroMind-M1
## 🧾 Overview
<div align="center">
<img src="assets/7b_performance_training.png" width="80%" alt="7B Model Training Performance" />
<p><i>Training performance of MiroMind-M1-RL-7B on AIME24 and AIME25.</i></p>
</div>
**MiroMind-M1** is a fully open-source series of reasoning language models built on `Qwen-2.5`, focused on advancing mathematical reasoning. It is trained through supervised fine-tuning (**SFT**) on 719K curated problems and reinforcement learning with verifiable rewards (**RLVR**) on 62K challenging examples, using a context-aware multi-stage policy optimization method (**CAMPO**). MiroMind-M1 achieves state-of-the-art performance among open-source 7B Qwen-2.5-based models on AIME24, AIME25, and MATH500, with all models (`MiroMind-M1-SFT-7B`, `MiroMind-M1-RL-7B`, `MiroMind-M1-RL-32B`), data (`MiroMind-M1-SFT-719K`, `MiroMind-M1-RL-62K`), and training setups openly released.
## 📊 Evaluation
### MiroMind-M1-SFT
| Model | Initial Checkpoint | AIME24 (avg@64) | AIME25 (avg@64) | MATH500 (avg@5) |
|------------------|----------------------------|--------|--------|---------|
| DeepSeek-R1-Distill | Qwen2.5-Math-7B | 55.5 | 40.4† | 92.8 |
| OpenThoughts | Qwen2.5-7B-Instruct | 31.3 | 23.3 | 83.2 |
| Open-R1 | Qwen2.5-Math-7B-Instruct | 36.7 | 40.0 | 90.6 |
| Synthetic-1 | Qwen2.5-7B-Instruct | 30.0 | 26.6 | 85.6 |
| MiMo-7B-SFT | MiMo-7B-Base | 58.7 | 44.3 | 93.0 |
| **MiroMind-SFT-7B** | Qwen2.5-Math-7B | 60.4 | 45.0 | 94.6 |
*† means that the score of DeepSeek-R1 on AIME25 is from our evaluation.*
### MiroMind-M1-RL
| Model | AIME24 (avg@64) | AIME25 (avg@64) | MATH500 (avg@5) |
|----------------------------------|--------|--------|---------|
| DeepSeek-R1 | 79.8 | 70.0 | – |
| DeepSeek-R1-0528 | 91.4 | 87.5 | – |
| Qwen3-8B | 76.0 | 67.3 | – |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | – |
| MiMo-7B-RL | 68.2 | 55.4 | 95.8 |
| ***32B Models trained from Qwen2.5 series*** | | | |
| DeepSeek-R1-Distill-Qwen-32B | 70.8 | 52.1 | 95.8 |
| Skywork-OR1-32B-Preview | 77.1 | 68.2 | 97.5 |
| **MiroMind-M1-RL-32B** | 77.5 | 65.6 | 96.4 |
| ***7B Models trained from Qwen2.5 series*** | | | |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | – |
| **MiroMind-M1-SFT-7B** | 60.4 | 45.0 | 94.6 |
| Light-R1-7B-DS | 59.1 | 44.3 | – |
| Skywork-OR1-7B | 72.2 | 54.6 | – |
| **MiroMind-M1-RL-7B** | 73.4 | 57.8 | 96.7 |
## 🔗 Resources
### Models
[`MiroMind-M1-SFT-7B`](https://huggingface.co/miromind-ai/MiroMind-M1-SFT-7B)<br>
[`MiroMind-M1-RL-7B`](https://huggingface.co/miromind-ai/MiroMind-M1-RL-7B)<br>
[`MiroMind-M1-RL-32B`](https://huggingface.co/miromind-ai/MiroMind-M1-RL-32B)<br>
### Data
[`MiroMind-M1-SFT-719K`](https://huggingface.co/datasets/miromind-ai/MiroMind-M1-SFT-719K)<br>
[`MiroMind-M1-RL-62K`](https://huggingface.co/datasets/miromind-ai/MiroMind-M1-RL-62K)<br>
## 🚀 Quickstart
You can explore the models using the Transformers library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "miromind-ai/MiroMind-M1-RL-32B" # Or miromind-ai/MiroMind-M1-RL-7B
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
prompt = "Given the equation $2x + 5 = 11$, what is the value of $x$?"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## 🛠 Getting Started
### Installation
venv environment:
```bash
git clone https://github.com/MiroMindAsia/MiroMind-M1.git
cd MiroMind-M1
# Install Python 3.10 environment.
python3.10 -m pip install virtualenv
virtualenv -p python3.10 venv
source venv/bin/activate
# Install dependencies.
pip3 install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip3 install numpy psutil ninja packaging cmake
pip3 install flash_attn==2.7.4.post1 --no-build-isolation # This may take a while...
pip3 install -e .
```
## 🏋️ Training
### Multi-Node Training
Here is a quick guide to starting Ray for multi-node training.
#### On the head node
```bash
ray stop
ray start --head --node-ip-address $HEAD_NODE_IP --num-gpus 8 --dashboard-host=0.0.0.0
```
#### On other nodes
```bash
ray stop
ray start --address="$HEAD_NODE_IP:6379" --num-gpus 8
```
### Start Training
First, please provide the variables below:
```bash
export MODEL_PATH=YOUR_MODEL_PATH
export CKPTS_DIR=YOUR_CKPTS_DIR
export TRAIN_FILE=YOUR_TRAIN_FILE
export TEST_FILE=YOUR_TEST_FILE
export HOME=YOUR_HOME_PATH
```
Then run the below script to start the training:
```bash
bash m1_train_script/campo_32b.sh
```
## ⚖️ Run Evaluation
We provide ready-to-use evaluation scripts in the `m1_eval_script/` directory for mathematical reasoning benchmarks.
### Quick Start
```bash
# Evaluate on AIME 2024
bash m1_eval_script/evaluate_7b_aime24.sh
# Evaluate on AIME 2025
bash m1_eval_script/evaluate_7b_aime25.sh
# Evaluate on Math-500
bash m1_eval_script/evaluate_7b_math500.sh
```
### Supported Benchmarks
| Dataset | Script | Standard Runs |
|---------|--------|---------------|
| **AIME 2024** | `evaluate_7b_aime24.sh` | 64 runs |
| **AIME 2025** | `evaluate_7b_aime25.sh` | 64 runs |
| **Math-500** | `evaluate_7b_math500.sh` | 5 runs |
### Results
Results are saved in `results/[model_name]/[dataset_name]/` with:
- `average_accuracy.txt`: Final accuracy score
- `run[X]_inference_eval_results.csv`: Detailed results
## 🙏 Acknowledgement
The RL training is built on the wonderful [`verl`](https://github.com/volcengine/verl) project.
|
John6666/umetana-mix-v2-v104-sdxl
|
John6666
| 2025-08-11T01:12:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"semi-realistic",
"stylistic consistency",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-11T01:05:01Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- semi-realistic
- stylistic consistency
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1791455/umetanamix-v2?modelVersionId=2100237).
This model created by [Umetana](https://civitai.com/user/Umetana).
|
roeker/blockassist-bc-quick_wiry_owl_1754874331
|
roeker
| 2025-08-11T01:07:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T01:06:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ZzzHelloWorld/siglip2-so400m-patch16-naflex-swin-4-18-fused-2drope-m_pooling
|
ZzzHelloWorld
| 2025-08-11T00:58:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip2",
"zero-shot-image-classification",
"vision",
"arxiv:2502.14786",
"arxiv:2303.15343",
"arxiv:2209.06794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2025-08-11T00:53:02Z |
---
license: apache-2.0
tags:
- vision
widget:
- src: >-
https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg
candidate_labels: bee in the sky, bee on the flower
example_title: Bee
library_name: transformers
pipeline_tag: zero-shot-image-classification
---
# SigLIP 2 So400m
[SigLIP 2](https://huggingface.co/papers/2502.14786) extends the pretraining objective of
[SigLIP](https://huggingface.co/papers/2303.15343) with prior, independently developed techniques
into a unified recipe, for improved semantic understanding, localization, and dense features.
## Intended uses
You can use the raw model for tasks like zero-shot image classification and
image-text retrieval, or as a vision encoder for VLMs (and other vision tasks).
Here is how to use this model to perform zero-shot image classification:
```python
from transformers import pipeline
from transformers.image_utils import load_image
# load pipeline
ckpt = "google/siglip2-so400m-patch16-naflex"
image_classifier = pipeline(model=ckpt, task="zero-shot-image-classification")
# load image and candidate labels
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)
candidate_labels = ["2 cats", "a plane", "a remote"]
# run inference
outputs = image_classifier(image, candidate_labels=candidate_labels)
print(outputs)
```
You can encode an image using the Vision Tower like so:
```python
import torch
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
# load the model and processor
ckpt = "google/siglip2-so400m-patch16-naflex"
model = AutoModel.from_pretrained(ckpt, device_map="auto").eval()
processor = AutoProcessor.from_pretrained(ckpt)
# load the image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
inputs = processor(images=[image], return_tensors="pt").to(model.device)
# run inference
with torch.no_grad():
image_embeddings = model.get_image_features(**inputs)
print(image_embeddings.shape)
```
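Text embeddings for retrieval can be obtained analogously with `get_text_features` (a small sketch following the same pattern; note that SigLIP-style models expect fixed-length padding):
```python
import torch
from transformers import AutoModel, AutoTokenizer

# load the model and tokenizer
ckpt = "google/siglip2-so400m-patch16-naflex"
model = AutoModel.from_pretrained(ckpt, device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# tokenize candidate texts with fixed-length padding
texts = ["2 cats", "a plane", "a remote"]
inputs = tokenizer(texts, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

# run inference
with torch.no_grad():
    text_embeddings = model.get_text_features(**inputs)
print(text_embeddings.shape)
```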
For more code examples, we refer to the [siglip2 documentation](https://huggingface.co/transformers/main/model_doc/siglip2.html#).
## Training procedure
SigLIP 2 adds some clever training objectives on top of SigLIP:
1. Decoder loss
2. Global-local and masked prediction loss
3. Aspect ratio and resolution adaptability
### Training data
SigLIP 2 is pre-trained on the WebLI dataset [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794).
### Compute
The model was trained on up to 2048 TPU-v5e chips.
## Evaluation results
Evaluation of SigLIP 2 is shown below (taken from the paper).

### BibTeX entry and citation info
```bibtex
@misc{tschannen2025siglip2multilingualvisionlanguage,
title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
author={Michael Tschannen and Alexey Gritsenko and Xiao Wang and Muhammad Ferjad Naeem and Ibrahim Alabdulmohsin and Nikhil Parthasarathy and Talfan Evans and Lucas Beyer and Ye Xia and Basil Mustafa and Olivier Hénaff and Jeremiah Harmsen and Andreas Steiner and Xiaohua Zhai},
year={2025},
eprint={2502.14786},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.14786},
}
```
|
pduro/blockassist-bc-insectivorous_slithering_leopard_1754873481
|
pduro
| 2025-08-11T00:52:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous slithering leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T00:52:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous slithering leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_40_influential
|
m-mulet
| 2025-08-11T00:51:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T00:51:49Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
darrow8/gpt-oss-24experts-sparse30
|
darrow8
| 2025-08-11T00:51:42Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"gpt-oss",
"pruned",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-11T00:41:55Z |
---
tags:
- gpt-oss
- pruned
- text-generation
base_model: openai/gpt-oss-20b
---
# Pruned GPT-OSS Model
This model has been pruned from 32 to 24 experts.
## Configuration
- Original experts: 32
- Remaining experts: 24
- Kept expert indices: [0, 2, 3, 7, 8, 9, 11, 12, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
- Parameter reduction: 40% sparsity applied to expert weights
## Loading
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"darrow8/gpt-oss-24experts-sparse30",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("darrow8/gpt-oss-24experts-sparse30")
```
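A minimal generation sketch (illustrative only; the prompt and decoding settings are arbitrary):
```python
messages = [{"role": "user", "content": "Explain mixture-of-experts pruning in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```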
|
Brenao122/01
|
Brenao122
| 2025-08-11T00:49:17Z | 0 | 0 | null |
[
"license:fair-noncommercial-research-license",
"region:us"
] | null | 2025-08-11T00:49:17Z |
---
license: fair-noncommercial-research-license
---
|
CohenQu/sft_llama3_3b-finemath-4plus.02.02-35000_numina-cot-100k.01.01.1_orchard
|
CohenQu
| 2025-08-11T00:49:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:CohenQu/numina-cot-100k.01.01.1",
"base_model:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long",
"base_model:finetune:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T10:26:18Z |
---
base_model: CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long
datasets: CohenQu/numina-cot-100k.01.01.1
library_name: transformers
model_name: sft_llama3_3b-finemath-4plus.02.02-35000_numina-cot-100k.01.01.1_orchard
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft_llama3_3b-finemath-4plus.02.02-35000_numina-cot-100k.01.01.1_orchard
This model is a fine-tuned version of [CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long](https://huggingface.co/CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long) on the [CohenQu/numina-cot-100k.01.01.1](https://huggingface.co/datasets/CohenQu/numina-cot-100k.01.01.1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CohenQu/sft_llama3_3b-finemath-4plus.02.02-35000_numina-cot-100k.01.01.1_orchard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/flexible-ordering/runs/arfgky7q)
This model was trained with SFT.
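For reference, here is a generic TRL SFT sketch (not the exact configuration used for this checkpoint; the dataset split and column format are assumptions):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: the dataset exposes a "train" split in a conversational format TRL can consume.
dataset = load_dataset("CohenQu/numina-cot-100k.01.01.1", split="train")

training_args = SFTConfig(output_dir="sft_llama3_3b_numina")
trainer = SFTTrainer(
    model="CohenQu/llama3_3b-finemath-4plus-flexible-ordering.02.02_long",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```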
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/evenly-mix-v11-sdxl
|
John6666
| 2025-08-11T00:46:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"softer colors",
"warmer",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:calculater/copycat-noob",
"base_model:merge:calculater/copycat-noob",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-11T00:38:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- softer colors
- warmer
- merge
- noobai
- illustrious
base_model:
- Laxhar/noobai-XL-1.1
- calculater/copycat-noob
- OnomaAIResearch/Illustrious-XL-v1.0
- OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://civitai.com/models/1568837/evenly-mix?modelVersionId=2099823).
This model created by [Evenly](https://civitai.com/user/Evenly).
|
pduro/blockassist-bc-insectivorous_slithering_leopard_1754873058
|
pduro
| 2025-08-11T00:46:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous slithering leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T00:46:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous slithering leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754872903
|
IvanJAjebu
| 2025-08-11T00:43:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T00:42:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JinnP/qwen3-8b-kernelbook-sft-megatron
|
JinnP
| 2025-08-11T00:42:38Z | 0 | 0 | null |
[
"megatron",
"qwen",
"sft",
"checkpoint",
"kernelbook",
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T04:08:01Z |
---
license: apache-2.0
tags:
- megatron
- qwen
- sft
- checkpoint
- kernelbook
---
# Qwen3-8B-KernelBook-SFT Megatron Checkpoint
## Description
This is a **Megatron-LM distributed checkpoint** of Qwen3-8B after Supervised Fine-Tuning (SFT) on the KernelBook dataset. This is iteration 566 of the training process.
## Checkpoint Format
This is a **raw Megatron-LM checkpoint**, NOT a Hugging Face Transformers model. It contains:
- `*.distcp` files: Distributed checkpoint shards (8 ranks × 2 model parallel = 16 files)
- `common.pt`: Common parameters shared across all ranks
- `metadata.json`: Checkpoint metadata
## Usage
### Loading in Megatron-LM
```python
# Load this checkpoint in your Megatron-LM training script
checkpoint_path = "path/to/iter_0000566"
# Use Megatron's checkpoint loading utilities
load_checkpoint(model, optimizer, lr_scheduler, checkpoint_path)
```
### Continuing Training (e.g., for RL)
```bash
# Example command to continue training with Megatron-LM
python train.py \
--load-checkpoint-dir /path/to/iter_0000566 \
--save-checkpoint-dir /path/to/new_checkpoints \
# ... other training arguments
```
### Download from Hugging Face Hub
```bash
# Clone entire checkpoint
git clone https://huggingface.co/JinnP/Qwen3-8B-KernelBook-SFT-Megatron
```
Or use `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

checkpoint_path = snapshot_download(
    repo_id="JinnP/Qwen3-8B-KernelBook-SFT-Megatron",
    repo_type="model"
)
```
## Training Details
- **Base Model**: Qwen3-8B
- **Training Method**: Supervised Fine-Tuning (SFT)
- **Dataset**: KernelBook
- **Iteration**: 566
- **Framework**: Megatron-LM
- **Parallelism**: 8 data parallel ranks × 2 model parallel
## Important Notes
⚠️ **This is NOT a Hugging Face Transformers model**. You cannot load it directly with `AutoModel.from_pretrained()`.
To use with Hugging Face Transformers, you would need to:
1. Convert the checkpoint using Megatron's conversion scripts
2. Or load it in Megatron-LM and export to HF format
## Next Steps
This checkpoint is ready for:
- Reinforcement Learning (RL) training
- Further fine-tuning
- Evaluation in Megatron-LM framework
## License
Apache 2.0
|
ecamli/Qwen3-0.6B-Gensyn-Swarm-vocal_placid_sloth
|
ecamli
| 2025-08-11T00:37:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_placid_sloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-26T15:59:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_placid_sloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1754871451
|
roeker
| 2025-08-11T00:19:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T00:18:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1754871433
|
ypszn
| 2025-08-11T00:19:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T00:17:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gogoruirui/blockassist-bc-carnivorous_prowling_toucan_1754869067
|
gogoruirui
| 2025-08-10T23:38:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous prowling toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T23:38:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous prowling toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_0_lr_2e-05_1280_all_37_epoch_2_layer_16
|
winnieyangwannan
| 2025-08-10T23:32:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T23:28:30Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pdill98/asianb
|
pdill98
| 2025-08-10T23:28:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-10T23:25:36Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: SXY
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# asianb
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `SXY` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
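For example, with diffusers (an illustrative sketch; the trigger word comes from this card, while the sampler settings and the assumption that `load_lora_weights` picks up the default safetensors file in this repo are mine):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("pdill98/asianb")  # assumes a single default LoRA safetensors file in this repo

image = pipe(
    "SXY, portrait photo, soft natural light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sxy.png")
```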
|
shoaib9/phase1
|
shoaib9
| 2025-08-10T23:23:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T23:22:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dylrih/sortify-images
|
dylrih
| 2025-08-10T23:19:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T22:51:50Z |
---
license: apache-2.0
---
|
guangyaoz/dpo
|
guangyaoz
| 2025-08-10T23:15:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-07-31T05:09:42Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="guangyaoz/dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
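For reference, here is a generic TRL DPO sketch (not the actual training configuration behind this checkpoint; the preference dataset is a placeholder):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

# Placeholder preference dataset with "chosen"/"rejected" pairs.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```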
### Framework versions
- TRL: 0.20.0
- Transformers: 4.53.2
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
developer-314e/result
|
developer-314e
| 2025-08-10T23:06:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-08T11:46:04Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: result
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for result
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="developer-314e/result", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.51.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
razor534/blockassist-bc-lazy_extinct_termite_1754867021
|
razor534
| 2025-08-10T23:04:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T23:04:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EleutherAI/deep-ignorance-e2e-extra-weak-filter
|
EleutherAI
| 2025-08-10T22:56:37Z | 116 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"base_model:EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter",
"base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter",
"license:apache-2.0",
"region:us"
] | null | 2025-07-12T10:17:46Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
license: apache-2.0
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
base_model:
- EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter
---
# Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://deepignorance.ai). It contains 18 models: a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints and annealing-stage models have 11.
> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.
## Research
Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.
We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
## Uses and Limitations
### Quickstart
We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
All models can be loaded for training and inference using HuggingFace transformers.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints, which are useful for studying how filtering affects model behavior across training time. Note that the annealing-stage models are generally the most capable, as they have been trained the longest. The circuit breaker models do not have intermediate checkpoints, as circuit breaking is applied to the final annealing checkpoint of each model.
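To see which intermediate checkpoint revisions a given repository exposes, the available branches can be listed with `huggingface_hub`. A small sketch (the repository name is one of the models from the table above):

```python
from huggingface_hub import list_repo_refs

# List all branches (checkpoint revisions) published for a Deep Ignorance model.
refs = list_repo_refs("EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal")
for branch in refs.branches:
    print(branch.name)
```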
### Full Model List
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| **Unfiltered Baseline Models** | | | |
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
| **Pretraining-Stage Only Models** | | | |
| [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - |
| [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - |
| **End-to-End Filtered Models** | | | |
| [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - |
| [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - |
| [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking |
| [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |
### Intended Use
Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models. It provides a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.
Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.
### Out-of-scope use
The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).
## Training
All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total.
### Training data
**[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.
**[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.
## Evaluations
We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.
### Biothreat Proxy Knowledge Benchmarks
We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:
**WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.
**WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
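To illustrate the scoring rule used in the cloze format, the sketch below computes a length-normalized log probability for a single answer continuation. The question and answer strings are placeholders, not benchmark items, and the exact evaluation harness is assumed rather than reproduced here.

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/deep-ignorance-unfiltered"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id)

def length_normalized_logprob(question: str, answer: str) -> float:
    # Concatenate question and answer, then score only the answer tokens.
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # log P(token at position i+1 | tokens up to position i)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_log_probs = [
        log_probs[0, pos, input_ids[0, pos + 1]].item() for pos in answer_positions
    ]
    # Length normalization: average log probability per answer token.
    return sum(token_log_probs) / len(token_log_probs)

# Placeholder example, not a WMDP-Bio item.
print(length_normalized_logprob("The capital of France is", " Paris"))
```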
### General Capability Benchmarks
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:
<!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention
- **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge -->
- **MMLU**: Factual knowledge across diverse topics
- **PIQA**: Physical commonsense reasoning tasks
- **LAMBADA**: Text comprehension requiring full-context understanding
- **HellaSwag**: Commonsense natural language inference
| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |
# Acknowledgments
This work was done in collaboration with the UK AI Security Institute and the University of Oxford.
We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship.
GPUs donated to EleutherAI by CoreWeave enabled us to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.
Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
|
EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat
|
EleutherAI
| 2025-08-10T22:55:38Z | 8 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"base_model:EleutherAI/deep-ignorance-e2e-strong-filter",
"base_model:finetune:EleutherAI/deep-ignorance-e2e-strong-filter",
"license:apache-2.0",
"region:us"
] | null | 2025-07-08T11:07:40Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
license: apache-2.0
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
base_model:
- EleutherAI/deep-ignorance-e2e-strong-filter
---
# Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://deepignorance.ai). It contains 18 models: a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints and annealing-stage models have 11.
> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.
## Research
Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.
We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
## Uses and Limitations
### Quickstart
We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
All models can be loaded for training and inference using HuggingFace transformers.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints, which are useful for studying how filtering affects model behavior across training time. Note that the annealing-stage models are generally the most capable, as they have been trained the longest. The circuit breaker models do not have intermediate checkpoints, as circuit breaking is applied to the final annealing checkpoint of each model.
### Full Model List
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| **Unfiltered Baseline Models** | | | |
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
| **Pretraining-Stage Only Models** | | | |
| [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - |
| [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - |
| **End-to-End Filtered Models** | | | |
| [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - |
| [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - |
| [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking |
| [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |
### Intended Use
Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models. It provides a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.
Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.
### Out-of-scope use
The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).
## Training
All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total.
### Training data
**[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.
**[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.
## Evaluations
We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.
### Biothreat Proxy Knowledge Benchmarks
We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:
**WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.
**WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
### General Capability Benchmarks
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:
<!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention
- **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge -->
- **MMLU**: Factual knowledge across diverse topics
- **PIQA**: Physical commonsense reasoning tasks
- **LAMBADA**: Text comprehension requiring full-context understanding
- **HellaSwag**: Commonsense natural language inference
| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |
# Acknowledgments
This work was done in collaboration with the UK AI Security Institute and the University of Oxford.
We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship.
GPUs donated to EleutherAI by CoreWeave enabled us to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.
Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
|
EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat
|
EleutherAI
| 2025-08-10T22:55:20Z | 8 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"base_model:EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
"base_model:finetune:EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
"license:apache-2.0",
"region:us"
] | null | 2025-07-08T11:02:15Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
license: apache-2.0
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
base_model:
- EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal
---
# Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://deepignorance.ai). It contains 18 models: a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints and annealing-stage models have 11.
> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.
## Research
Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.
We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
## Uses and Limitations
### Quickstart
We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
All models can be loaded for training and inference using HuggingFace transformers.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints, which are useful for studying how filtering affects model behavior across training time. Note that the annealing-stage models are generally the most capable, as they have been trained the longest. The circuit breaker models do not have intermediate checkpoints, as circuit breaking is applied to the final annealing checkpoint of each model.
### Full Model List
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| **Unfiltered Baseline Models** | | | |
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
| **Pretraining-Stage Only Models** | | | |
| [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - |
| [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - |
| **End-to-End Filtered Models** | | | |
| [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - |
| [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - |
| [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking |
| [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |
### Intended Use
Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models. It provides a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.
Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.
### Out-of-scope use
The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).
## Training
All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total.
### Training data
**[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.
**[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.
## Evaluations
We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.
### Biothreat Proxy Knowledge Benchmarks
We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:
**WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.
**WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
### General Capability Benchmarks
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:
<!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention
- **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge -->
- **MMLU**: Factual knowledge across diverse topics
- **PIQA**: Physical commonsense reasoning tasks
- **LAMBADA**: Text comprehension requiring full-context understanding
- **HellaSwag**: Commonsense natural language inference
| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |
# Acknowledgments
This work was done in collaboration with the UK AI Security Institute and the University of Oxford.
We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship.
GPUs donated to EleutherAI by CoreWeave enabled us to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.
Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
|
EleutherAI/deep-ignorance-pretraining-stage-strong-filter
|
EleutherAI
| 2025-08-10T22:47:38Z | 361 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"license:apache-2.0",
"region:us"
] | null | 2025-07-06T17:03:00Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
license: apache-2.0
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
---
# Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://deepignorance.ai). It contains 18 models: a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints and annealing-stage models have 11.
> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.
## Research
Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.
We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
## Uses and Limitations
### Quickstart
We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
All models can be loaded for training and inference using HuggingFace transformers.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model.
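For studies across training time, the available checkpoint branches can be listed programmatically instead of being typed by hand. The snippet below is an illustrative sketch rather than part of the official quickstart; it assumes the checkpoint branches follow the `global_step<N>` naming shown above.
```python
# Sketch: enumerate checkpoint branches and load an early one for comparison.
from huggingface_hub import list_repo_refs
from transformers import GPTNeoXForCausalLM

repo = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"

refs = list_repo_refs(repo)
checkpoints = sorted(
    (b.name for b in refs.branches if b.name.startswith("global_step")),
    key=lambda name: int(name.removeprefix("global_step")),
)
print(checkpoints)  # training checkpoints, earliest to latest

# Load the earliest available checkpoint for a training-dynamics comparison.
early_model = GPTNeoXForCausalLM.from_pretrained(repo, revision=checkpoints[0])
```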
### Full Model List
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| **Unfiltered Baseline Models** | | | |
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
| **Pretraining-Stage Only Models** | | | |
| [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - |
| [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - |
| **End-to-End Filtered Models** | | | |
| [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - |
| [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - |
| [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking |
| [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |
### Intended Use
Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models. It provides a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.
Deep Ignorance models have not undergone any post-training. They often fall into repetition and do not follow user instructions, so structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.
### Out-of-scope use
The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).
## Training
All of our models undergo identical pretraining and annealing setups, differing only in the data removed by filters; all other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for slightly more than one epoch so that every model reaches the same total budget of 550B training tokens.
### Training data
**[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.
**[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.
## Evaluations
We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.
### Biothreat Proxy Knowledge Benchmarks
We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:
**WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.
**WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
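To make the cloze format concrete, the sketch below scores each candidate answer by the mean log-probability of its tokens given the question alone, which is the length-normalized quantity described above. It is a simplified stand-in for the actual evaluation harness; the prompt template and tokenization details are assumptions.
```python
# Simplified length-normalized cloze scoring (illustrative, not the harness code).
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model_id = "EleutherAI/deep-ignorance-unfiltered"  # any model from the suite
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id).eval()

def cloze_score(question: str, answer: str) -> float:
    """Mean log-probability of the answer tokens, conditioned on the question."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    a_ids = tokenizer(" " + answer, return_tensors="pt").input_ids
    input_ids = torch.cat([q_ids, a_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    # Positions whose next-token prediction corresponds to an answer token.
    positions = range(q_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_lp = [log_probs[0, p, input_ids[0, p + 1]] for p in positions]
    return float(torch.stack(token_lp).mean())  # length normalization

choices = ["choice A", "choice B"]
best = max(choices, key=lambda c: cloze_score("Question text ... Answer:", c))
```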
### General Capability Benchmarks
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:
<!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention
- **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge -->
- **MMLU**: Factual knowledge across diverse topics
- **PIQA**: Physical commonsense reasoning tasks
- **LAMBADA**: Text comprehension requiring full-context understanding
- **HellaSwag**: Commonsense natural language inference
| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |
## Acknowledgments
This work was done in collaboration with the UK AI Security Institute and the University of Oxford.
We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship.
GPUs donated to EleutherAI by CoreWeave enabled the research behind our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.
Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
|
sukrucildirr/blockassist-bc-miniature_frisky_cobra_1754865919
|
sukrucildirr
| 2025-08-10T22:46:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature frisky cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T22:46:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature frisky cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mowen222/task-13-Qwen-Qwen2.5-3B-Instruct
|
mowen222
| 2025-08-10T22:35:22Z | 29 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-10T01:12:35Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
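Until the card is filled in, a minimal loading sketch (assuming this repository holds a standard PEFT adapter for the base model listed above) is:
```python
# Hedged sketch: attach the adapter to its base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-3B-Instruct"
adapter_id = "mowen222/task-13-Qwen-Qwen2.5-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```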
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
fbaldassarri/EleutherAI_pythia-1.4b-autoawq-int4-gs64-asym
|
fbaldassarri
| 2025-08-10T22:12:02Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"awq",
"auto-awq",
"autoawq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b",
"base_model:quantized:EleutherAI/pythia-1.4b",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-10T22:07:26Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- awq
- auto-awq
- autoawq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b
base_model: EleutherAI/pythia-1.4b
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: AWQ (AutoAWQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-1.4b has been quantized for CPU inference.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "EleutherAI/pythia-1.4b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 64, asymmetric quantization, tuned on CPU without AMP
bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export in AutoAWQ format
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-autoawq-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_awq', inplace=True)
```
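As a quick sanity check (not part of the original recipe, and assuming the export writes a standard `quantization_config` block into `config.json`), you can confirm the intended settings were recorded:
```python
# Hedged sanity check: the exported checkpoint should record 4-bit,
# group-size-64, asymmetric AWQ settings in its quantization_config.
import json
from pathlib import Path

output_dir = Path("./AutoRound/EleutherAI_pythia-1.4b-autoawq-int4-gs64-asym")
config = json.loads((output_dir / "config.json").read_text())
print(json.dumps(config.get("quantization_config", {}), indent=2))
```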
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_10_influential
|
m-mulet
| 2025-08-10T22:10:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T22:10:36Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
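A minimal usage sketch, assuming the repository holds a standard Transformers-format Qwen2 checkpoint (as the tags suggest):
```python
# Hedged sketch: chat-style generation with this fine-tuned checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_10_influential"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "In one sentence, what do owls eat?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```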
|
longhoang2112/whisper-small-fine-tuning-2steps-slu
|
longhoang2112
| 2025-08-10T22:09:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2025-08-10T22:09:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
fbaldassarri/EleutherAI_pythia-1.4b-autogptq-int4-gs64-sym
|
fbaldassarri
| 2025-08-10T22:06:04Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b",
"base_model:quantized:EleutherAI/pythia-1.4b",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-10T22:01:26Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b
base_model: EleutherAI/pythia-1.4b
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Symmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-1.4b has been quantized for CPU inference.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "EleutherAI/pythia-1.4b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 64, symmetric quantization, tuned on CPU without AMP
bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export in AutoGPTQ format
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-autogptq-int4-gs64-sym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
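As before, a quick sanity check (not part of the original recipe, and assuming the export writes a standard `quantization_config` block into `config.json`) can confirm the symmetric GPTQ settings were recorded:
```python
# Hedged sanity check: the exported checkpoint should record 4-bit,
# group-size-64, symmetric GPTQ settings in its quantization_config.
import json
from pathlib import Path

output_dir = Path("./AutoRound/EleutherAI_pythia-1.4b-autogptq-int4-gs64-sym")
config = json.loads((output_dir / "config.json").read_text())
print(json.dumps(config.get("quantization_config", {}), indent=2))
```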
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
roeker/blockassist-bc-quick_wiry_owl_1754863285
|
roeker
| 2025-08-10T22:03:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T22:02:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/Qwen2.5-14B_DP24_R1_masc_career
|
annasoli
| 2025-08-10T22:01:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T21:51:20Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: transformers
model_name: Qwen2.5-14B_DP24_R1_masc_career
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-14B_DP24_R1_masc_career
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="annasoli/Qwen2.5-14B_DP24_R1_masc_career", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/NN-MATS-T/clarifying-em/runs/tuqayu80)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|