modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
alexiseeifl/blockassist-bc-fleecy_flapping_pigeon_1757603390 | alexiseeifl | 2025-09-11T15:09:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fleecy flapping pigeon", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:09:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy flapping pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hbfc7671/blockassist-bc-mighty_small_fox_1757603365 | hbfc7671 | 2025-09-11T15:09:36Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mighty small fox", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:09:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty small fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mehere23/gpt-oss-20b | mehere23 | 2025-09-11T15:09:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "mxfp4", "region:us"] | text-generation | 2025-09-11T15:08:14Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained using our [harmony response format](https://github.com/openai/harmony) and should only be used with that format; they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-20b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
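If you call `model.generate` directly instead, the prompt must already be rendered in the harmony format. Below is a minimal sketch using the [openai-harmony](https://github.com/openai/harmony) package; the names follow its README, but treat the exact API as an assumption to verify against that repository:
```py
from openai_harmony import (
    Conversation,
    HarmonyEncodingName,
    Message,
    Role,
    load_harmony_encoding,
)

# Render a conversation into harmony-formatted token ids for the model to complete.
enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)
convo = Conversation.from_messages([
    Message.from_role_and_content(Role.USER, "Explain quantum mechanics concisely."),
])
prefill_ids = enc.render_conversation_for_completion(convo, Role.ASSISTANT)
# Pass prefill_ids to model.generate(...) and decode the completion with enc.
```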
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
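Once the server is running, any OpenAI-compatible client can query it. A sketch with the `openai` Python client (the `localhost:8000` base URL matches the chat command above; the `/v1` path and dummy key are assumptions for a typical OpenAI-compatible server):
```py
from openai import OpenAI

# Local OpenAI-compatible endpoint; the key is a placeholder and not checked locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```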
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
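For example, with the pipeline from the Transformers snippet above, a system message is enough to request more deliberate reasoning (a sketch reusing the earlier `pipe`):
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Why does the sky turn red at sunset?"},
]
outputs = pipe(messages, max_new_tokens=512)  # higher effort tends to consume more tokens
print(outputs[0]["generated_text"][-1])
```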
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas (see the sketch below)
* Agentic operations like browser tasks
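As an illustration of function calling, here is a minimal sketch using the Transformers chat template's `tools` argument; the weather schema and function name are hypothetical, and the assumption is that the model's chat template renders the schema into the harmony format:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# Hypothetical tool schema the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)  # inspect how the schema is injected before generating
```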
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
# Citation
```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
```
|
slatinlatrina/blockassist-bc-mammalian_sneaky_prawn_1757603343 | slatinlatrina | 2025-09-11T15:09:11Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tame dormant hyena", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:09:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame dormant hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HaniBO/test2_gguf | HaniBO | 2025-09-11T15:09:11Z | 0 | 0 | peft | ["peft", "safetensors", "gguf", "base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "endpoints_compatible", "region:us"] | text-generation | 2025-09-11T14:02:07Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
jobs-git/Kimi-K2-Instruct-GGUF | jobs-git | 2025-09-11T15:08:59Z | 0 | 0 | transformers | ["transformers", "gguf", "deepseek_v3", "text-generation", "unsloth", "custom_code", "base_model:moonshotai/Kimi-K2-Instruct", "base_model:quantized:moonshotai/Kimi-K2-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "fp8", "region:us", "conversational"] | text-generation | 2025-09-11T15:08:59Z |
---
tags:
- unsloth
base_model:
- moonshotai/Kimi-K2-Instruct
license: other
license_link: LICENSE.md
license_name: modified-mit
library_name: transformers
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Learn how to run Kimi-K2 Dynamic GGUFs - <a href="https://docs.unsloth.ai/basics/kimi-k2">Read our Guide!</a></strong>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="margin-top: 0;display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">🌙 Kimi K2 Usage Guidelines</h1>
</div>
- You can now use the latest update of [llama.cpp](https://github.com/ggml-org/llama.cpp) to run the model.
- For complete detailed instructions, see our guide: [docs.unsloth.ai/basics/kimi-k2](https://docs.unsloth.ai/basics/kimi-k2)
It is recommended to have at least 128GB of unified memory to run the small quants. With 16GB of VRAM and 256GB of RAM, expect 5+ tokens/s.
For best results, use any 2-bit XL quant or above.
Set the temperature to 0.6 (the recommended value) to reduce repetition and incoherence.
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/MoonshotAI/Kimi-K2/main/figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intellignece">
</picture>
</div>
<hr>
<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
<a href="https://github.com/moonshotai/Kimi-K2"><img alt="github" src="https://img.shields.io/badge/🤖%20Github-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
<a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
<a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
<a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/moonshotai/Kimi-K2/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
</div>
<p align="center">
<b>📰 <a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> | <b>📄 Paper Link (coming soon)</b>
</p>
## 0. Changelog
### 2025.7.15
- We have updated our tokenizer implementation. Now special tokens like `[EOS]` can be encoded to their token ids.
- We fixed a bug in the chat template that was breaking multi-turn tool calls.
## 1. Model Introduction
Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
### Key Features
- Large-Scale Training: Pre-trained a 1T parameter MoE model on 15.5T tokens with zero training instability.
- MuonClip Optimizer: We apply the Muon optimizer to an unprecedented scale, and develop novel optimization techniques to resolve instabilities while scaling up.
- Agentic Intelligence: Specifically designed for tool use, reasoning, and autonomous problem-solving.
### Model Variants
- **Kimi-K2-Base**: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
- **Kimi-K2-Instruct**: The post-trained model best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.
<div align="center">
<picture>
<img src="figures/banner.png" width="80%" alt="Evaluation Results">
</picture>
</div>
## 2. Model Summary
<div align="center">
| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (Dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per Expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 128K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |
</div>
## 3. Evaluation Results
#### Instruction model evaluation results
<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center">Metric</th>
<th align="center"><sup>Kimi K2 Instruct</sup></th>
<th align="center"><sup>DeepSeek-V3-0324</sup></th>
<th align="center"><sup>Qwen3-235B-A22B <br><sup>(non-thinking)</sup></sup></th>
<th align="center"><sup>Claude Sonnet 4 <br><sup>(w/o extended thinking)</sup></sup></th>
<th align="center"><sup>Claude Opus 4 <br><sup>(w/o extended thinking)</sup></sup></th>
<th align="center"><sup>GPT-4.1</sup></th>
<th align="center"><sup>Gemini 2.5 Flash <br> Preview (05-20)</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=9><strong>Coding Tasks</strong></td>
</tr>
<tr>
<td align="center">LiveCodeBench v6<br><sup>(Aug 24 - May 25)</sup></td>
<td align="center">Pass@1</td>
<td align="center"><strong>53.7</strong></td>
<td align="center">46.9</td>
<td align="center">37.0</td>
<td align="center">48.5</td>
<td align="center">47.4</td>
<td align="center">44.7</td>
<td align="center">44.7</td>
</tr>
<tr>
<td align="center">OJBench</td>
<td align="center">Pass@1</td>
<td align="center"><strong>27.1</strong></td>
<td align="center">24.0</td>
<td align="center">11.3</td>
<td align="center">15.3</td>
<td align="center">19.6</td>
<td align="center">19.5</td>
<td align="center">19.5</td>
</tr>
<tr>
<td align="center">MultiPL-E</td>
<td align="center">Pass@1</td>
<td align="center"><ins><strong>85.7</strong></ins></td>
<td align="center">83.1</td>
<td align="center">78.2</td>
<td align="center">88.6</td>
<td align="center"><strong>89.6</strong></td>
<td align="center">86.7</td>
<td align="center">85.6</td>
</tr>
<tr>
<td align="center">SWE-bench Verified <br/><sup>(Agentless Coding)</sup></td>
<td align="center">Single Patch w/o Test (Acc)</td>
<td align="center"><ins><strong>51.8</strong></ins></td>
<td align="center">36.6</td>
<td align="center">39.4</td>
<td align="center">50.2</td>
<td align="center"><strong>53.0</strong></td>
<td align="center">40.8</td>
<td align="center">32.6</td>
</tr>
<tr>
<td align="center" rowspan="2">SWE-bench Verified <br/> <sup>(Agentic Coding)</sup></td>
<td align="center">Single Attempt (Acc)</td>
<td align="center"><ins><strong>65.8</strong></ins></td>
<td align="center">38.8</td>
<td align="center">34.4</td>
<td align="center"><strong>72.7</strong><sup>*</sup></td>
<td align="center">72.5<sup>*</sup></td>
<td align="center">54.6</td>
<td align="center">—</td>
</tr>
<tr>
<!--<td align="center">(Agentic Coding)</td>-->
<td align="center">Multiple Attempts (Acc)</td>
<td align="center"><ins><strong>71.6</strong></ins></td>
<td align="center">—</td>
<td align="center">—</td>
<td align="center"><strong>80.2</strong></td>
<td align="center">79.4<sup>*</sup></td>
<td align="center">—</td>
<td align="center">—</td>
</tr>
<tr>
<td align="center">SWE-bench Multilingual<br /> <sup>(Agentic Coding)</sup></td>
<td align="center">Single Attempt (Acc)</td>
<td align="center"><ins><strong>47.3</strong> </ins></td>
<td align="center">25.8</td>
<td align="center">20.9</td>
<td align="center"><strong>51.0</strong></td>
<td align="center">—</td>
<td align="center">31.5</td>
<td align="center">—</td>
</tr>
<tr>
<td align="center" rowspan="2">TerminalBench</td>
<td align="center">Inhouse Framework (Acc)</td>
<td align="center"><ins><strong>30.0</strong></ins></td>
<td align="center">—</td>
<td align="center">—</td>
<td align="center">35.5</td>
<td align="center"><strong>43.2</strong></td>
<td align="center">8.3</td>
<td align="center">—</td>
</tr>
<tr>
<!--<td align="center">TerminalBench</td>-->
<td align="center">Terminus (Acc)</td>
<td align="center"><ins><strong>25.0</strong> </ins></td>
<td align="center">16.3</td>
<td align="center">6.6</td>
<td align="center">—</td>
<td align="center">—</td>
<td align="center"><strong>30.3</strong></td>
<td align="center">16.8</td>
</tr>
<tr>
<td align="center">Aider-Polyglot</td>
<td align="center">Acc</td>
<td align="center">60.0</td>
<td align="center">55.1</td>
<td align="center"><ins><strong>61.8</strong></ins></td>
<td align="center">56.4</td>
<td align="center"><strong>70.7</strong></td>
<td align="center">52.4</td>
<td align="center">44.0</td>
</tr>
<tr>
<td align="center" colspan=9><strong>Tool Use Tasks</strong></td>
</tr>
<tr>
<td align="center">Tau2 retail</td>
<td align="center">Avg@4</td>
<td align="center"><ins><strong>70.6</strong></ins></td>
<td align="center">69.1</td>
<td align="center">57.0</td>
<td align="center">75.0</td>
<td align="center"><strong>81.8</strong></td>
<td align="center">74.8</td>
<td align="center">64.3</td>
</tr>
<tr>
<td align="center">Tau2 airline</td>
<td align="center">Avg@4</td>
<td align="center"><ins><strong>56.5</strong></ins></td>
<td align="center">39.0</td>
<td align="center">26.5</td>
<td align="center">55.5</td>
<td align="center"><strong>60.0</strong></td>
<td align="center">54.5</td>
<td align="center">42.5</td>
</tr>
<tr>
<td align="center">Tau2 telecom</td>
<td align="center">Avg@4</td>
<td align="center"><strong>65.8</strong></td>
<td align="center">32.5</td>
<td align="center">22.1</td>
<td align="center">45.2</td>
<td align="center">57.0</td>
<td align="center">38.6</td>
<td align="center">16.9</td>
</tr>
<tr>
<td align="center">AceBench</td>
<td align="center">Acc</td>
<td align="center"><ins><strong>76.5</strong></ins></td>
<td align="center">72.7</td>
<td align="center">70.5</td>
<td align="center">76.2</td>
<td align="center">75.6</td>
<td align="center"><strong>80.1</strong></td>
<td align="center">74.5</td>
</tr>
<tr>
<td align="center" colspan=9><strong>Math & STEM Tasks</strong></td>
</tr>
<tr>
<td align="center">AIME 2024</td>
<td align="center">Avg@64</td>
<td align="center"><strong>69.6</strong></td>
<td align="center">59.4<sup>*</sup></td>
<td align="center">40.1<sup>*</sup></td>
<td align="center">43.4</td>
<td align="center">48.2</td>
<td align="center">46.5</td>
<td align="center">61.3</td>
</tr>
<tr>
<td align="center">AIME 2025</td>
<td align="center">Avg@64</td>
<td align="center"><strong>49.5</strong></td>
<td align="center">46.7</td>
<td align="center">24.7<sup>*</sup></td>
<td align="center">33.1<sup>*</sup></td>
<td align="center">33.9<sup>*</sup></td>
<td align="center">37.0</td>
<td align="center">46.6</td>
</tr>
<tr>
<td align="center">MATH-500</td>
<td align="center">Acc</td>
<td align="center"><strong>97.4</strong></td>
<td align="center">94.0<sup>*</sup></td>
<td align="center">91.2<sup>*</sup></td>
<td align="center">94.0</td>
<td align="center">94.4</td>
<td align="center">92.4</td>
<td align="center">95.4</td>
</tr>
<tr>
<td align="center">HMMT 2025</td>
<td align="center">Avg@32</td>
<td align="center"><strong>38.8</strong></td>
<td align="center">27.5</td>
<td align="center">11.9</td>
<td align="center">15.9</td>
<td align="center">15.9</td>
<td align="center">19.4</td>
<td align="center">34.7</td>
</tr>
<tr>
<td align="center">CNMO 2024</td>
<td align="center">Avg@16</td>
<td align="center">74.3</td>
<td align="center"><ins><strong>74.7</strong></ins></td>
<td align="center">48.6</td>
<td align="center">60.4</td>
<td align="center">57.6</td>
<td align="center">56.6</td>
<td align="center"><strong>75.0</strong></td>
</tr>
<tr>
<td align="center">PolyMath-en</td>
<td align="center">Avg@4</td>
<td align="center"><strong>65.1</strong></td>
<td align="center">59.5</td>
<td align="center">51.9</td>
<td align="center">52.8</td>
<td align="center">49.8</td>
<td align="center">54.0</td>
<td align="center">49.9</td>
</tr>
<tr>
<td align="center">ZebraLogic</td>
<td align="center">Acc</td>
<td align="center"><strong>89.0</strong></td>
<td align="center">84.0</td>
<td align="center">37.7<sup>*</sup></td>
<td align="center">73.7</td>
<td align="center">59.3</td>
<td align="center">58.5</td>
<td align="center">57.9</td>
</tr>
<tr>
<td align="center">AutoLogi</td>
<td align="center">Acc</td>
<td align="center"><ins><strong>89.5</strong></ins></td>
<td align="center">88.9</td>
<td align="center">83.3</td>
<td align="center"><strong>89.8</strong></td>
<td align="center">86.1</td>
<td align="center">88.2</td>
<td align="center">84.1</td>
</tr>
<tr>
<td align="center">GPQA-Diamond</td>
<td align="center">Avg@8</td>
<td align="center"><strong>75.1</strong></td>
<td align="center">68.4<sup>*</sup></td>
<td align="center">62.9<sup>*</sup></td>
<td align="center">70.0<sup>*</sup></td>
<td align="center">74.9<sup>*</sup></td>
<td align="center">66.3</td>
<td align="center">68.2</td>
</tr>
<tr>
<td align="center">SuperGPQA</td>
<td align="center">Acc</td>
<td align="center"><strong>57.2</strong></td>
<td align="center">53.7</td>
<td align="center">50.2</td>
<td align="center">55.7</td>
<td align="center">56.5</td>
<td align="center">50.8</td>
<td align="center">49.6</td>
</tr>
<tr>
<td align="center">Humanity's Last Exam<br><sup>(Text Only)</sup></td>
<td align="center">-</td>
<td align="center">4.7</td>
<td align="center">5.2</td>
<td align="center"><ins><strong>5.7</strong></ins></td>
<td align="center">5.8</td>
<td align="center"><strong>7.1</strong></td>
<td align="center">3.7</td>
<td align="center">5.6</td>
</tr>
<tr>
<td align="center" colspan=9><strong>General Tasks</strong></td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">EM</td>
<td align="center"><ins><strong>89.5</strong></ins></td>
<td align="center">89.4</td>
<td align="center">87.0</td>
<td align="center">91.5</td>
<td align="center"><strong>92.9</strong></td>
<td align="center">90.4</td>
<td align="center">90.1</td>
</tr>
<tr>
<td align="center">MMLU-Redux</td>
<td align="center">EM</td>
<td align="center"><ins><strong>92.7</strong></ins></td>
<td align="center">90.5</td>
<td align="center">89.2</td>
<td align="center">93.6</td>
<td align="center"><strong>94.2</strong></td>
<td align="center">92.4</td>
<td align="center">90.6</td>
</tr>
<tr>
<td align="center">MMLU-Pro</td>
<td align="center">EM</td>
<td align="center">81.1</td>
<td align="center"><ins><strong>81.2</strong></ins><sup>*</sup></td>
<td align="center">77.3</td>
<td align="center">83.7</td>
<td align="center"><strong>86.6</strong></td>
<td align="center">81.8</td>
<td align="center">79.4</td>
</tr>
<tr>
<td align="center">IFEval</td>
<td align="center">Prompt Strict</td>
<td align="center"><strong>89.8</strong></td>
<td align="center">81.1</td>
<td align="center">83.2<sup>*</sup></td>
<td align="center">87.6</td>
<td align="center">87.4</td>
<td align="center">88.0</td>
<td align="center">84.3</td>
</tr>
<tr>
<td align="center">Multi-Challenge</td>
<td align="center">Acc</td>
<td align="center"><strong>54.1</strong></td>
<td align="center">31.4</td>
<td align="center">34.0</td>
<td align="center">46.8</td>
<td align="center">49.0</td>
<td align="center">36.4</td>
<td align="center">39.5</td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">Correct</td>
<td align="center"><ins><strong>31.0</strong></ins></td>
<td align="center">27.7</td>
<td align="center">13.2</td>
<td align="center">15.9</td>
<td align="center">22.8</td>
<td align="center"><strong>42.3</strong></td>
<td align="center">23.3</td>
</tr>
<tr>
<td align="center">Livebench</td>
<td align="center">Pass@1</td>
<td align="center"><strong>76.4</strong></td>
<td align="center">72.4</td>
<td align="center">67.6</td>
<td align="center">74.8</td>
<td align="center">74.6</td>
<td align="center">69.8</td>
<td align="center">67.8</td>
</tr>
</tbody>
</table>
</div>
<sup>
• Bold denotes global SOTA, and underlined denotes open-source SOTA.
</sup><br/><sup>
• Data points marked with * are taken directly from the model's tech report or blog.
</sup><br/><sup>
• All metrics, except for SWE-bench Verified (Agentless), are evaluated with an 8k output token length. SWE-bench Verified (Agentless) is limited to a 16k output token length.
</sup><br/><sup>
• Kimi K2 achieves 65.8% pass@1 on the SWE-bench Verified tests with bash/editor tools (single-attempt patches, no test-time compute). It also achieves a 47.3% pass@1 on the SWE-bench Multilingual tests under the same conditions. Additionally, we report results on SWE-bench Verified tests (71.6%) that leverage parallel test-time compute by sampling multiple sequences and selecting the single best via an internal scoring model.
</sup><br/><sup>
• To ensure the stability of the evaluation, we employed avg@k on AIME, HMMT, CNMO, PolyMath-en, GPQA-Diamond, EvalPlus, and Tau2.
</sup><br/><sup>
• Some data points have been omitted due to prohibitively expensive evaluation costs.
</sup>
---
#### Base model evaluation results
<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center">Metric</th>
<th align="center">Shot</th>
<th align="center">Kimi K2 Base</th>
<th align="center">Deepseek-V3-Base</th>
<th align="center">Qwen2.5-72B</th>
<th align="center">Llama 4 Maverick</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="7"><strong>General Tasks</strong></td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>87.8</strong></td>
<td align="center">87.1</td>
<td align="center">86.1</td>
<td align="center">84.9</td>
</tr>
<tr>
<td align="center">MMLU-pro</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>69.2</strong></td>
<td align="center">60.6</td>
<td align="center">62.8</td>
<td align="center">63.5</td>
</tr>
<tr>
<td align="center">MMLU-redux-2.0</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>90.2</strong></td>
<td align="center">89.5</td>
<td align="center">87.8</td>
<td align="center">88.2</td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">Correct</td>
<td align="center">5-shot</td>
<td align="center"><strong>35.3</strong></td>
<td align="center">26.5</td>
<td align="center">10.3</td>
<td align="center">23.7</td>
</tr>
<tr>
<td align="center">TriviaQA</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>85.1</strong></td>
<td align="center">84.1</td>
<td align="center">76.0</td>
<td align="center">79.3</td>
</tr>
<tr>
<td align="center">GPQA-Diamond</td>
<td align="center">Avg@8</td>
<td align="center">5-shot</td>
<td align="center">48.1</td>
<td align="center"><strong>50.5</strong></td>
<td align="center">40.8</td>
<td align="center">49.4</td>
</tr>
<tr>
<td align="center">SuperGPQA</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>44.7</strong></td>
<td align="center">39.2</td>
<td align="center">34.2</td>
<td align="center">38.8</td>
</tr>
<tr>
<td align="center" colspan="7"><strong>Coding Tasks</strong></td>
</tr>
<tr>
<td align="center">LiveCodeBench v6</td>
<td align="center">Pass@1</td>
<td align="center">1-shot</td>
<td align="center"><strong>26.3</strong></td>
<td align="center">22.9</td>
<td align="center">21.1</td>
<td align="center">25.1</td>
</tr>
<tr>
<td align="center">EvalPlus</td>
<td align="center">Pass@1</td>
<td align="center">-</td>
<td align="center"><strong>80.3</strong></td>
<td align="center">65.6</td>
<td align="center">66.0</td>
<td align="center">65.5</td>
</tr>
<tr>
<td align="center" colspan="7"><strong>Mathematics Tasks</strong></td>
</tr>
<tr>
<td align="center">MATH</td>
<td align="center">EM</td>
<td align="center">4-shot</td>
<td align="center"><strong>70.2</strong></td>
<td align="center">60.1</td>
<td align="center">61.0</td>
<td align="center">63.0</td>
</tr>
<tr>
<td align="center">GSM8k</td>
<td align="center">EM</td>
<td align="center">8-shot</td>
<td align="center"><strong>92.1</strong></td>
<td align="center">91.7</td>
<td align="center">90.4</td>
<td align="center">86.3</td>
</tr>
<tr>
<td align="center" colspan="7"><strong>Chinese Tasks</strong></td>
</tr>
<tr>
<td align="center">C-Eval</td>
<td align="center">EM</td>
<td align="center">5-shot</td>
<td align="center"><strong>92.5</strong></td>
<td align="center">90.0</td>
<td align="center">90.9</td>
<td align="center">80.9</td>
</tr>
<tr>
<td align="center">CSimpleQA</td>
<td align="center">Correct</td>
<td align="center">5-shot</td>
<td align="center"><strong>77.6</strong></td>
<td align="center">72.1</td>
<td align="center">50.5</td>
<td align="center">53.5</td>
</tr>
</tbody>
</table>
</div>
<sup>
• We only evaluate open-source pretrained models in this work. We report results for Qwen2.5-72B because the base checkpoint for Qwen3-235B-A22B was not open-sourced at the time of our study.
</sup><br/><sup>
• All models are evaluated using the same evaluation protocol.
</sup>
## 4. Deployment
> [!Note]
> You can access Kimi K2's API at https://platform.moonshot.ai ; we provide an OpenAI/Anthropic-compatible API.
>
> The Anthropic-compatible API maps temperature as `real_temperature = request_temperature * 0.6` for better compatibility with existing applications (for example, a requested temperature of 1.0 is served as 0.6).
Our model checkpoints are stored in the block-fp8 format; you can find them on [Hugging Face](https://huggingface.co/moonshotai/Kimi-K2-Instruct).
Currently, we recommend running Kimi-K2 on the following inference engines:
* vLLM
* SGLang
* KTransformers
* TensorRT-LLM
Deployment examples for vLLM and SGLang can be found in the [Model Deployment Guide](docs/deploy_guidance.md).
---
## 5. Model Usage
### Chat Completion
Once the local inference service is up, you can interact with it through the chat endpoint:
```python
from openai import OpenAI

def simple_chat(client: OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text": "Please give a brief self-introduction."}]},
    ]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        temperature=0.6,
        max_tokens=256,
    )
    print(response.choices[0].message.content)
```
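For a local deployment, point the client at your server's OpenAI-compatible endpoint; a usage sketch (the base URL and key are placeholders for whatever your vLLM/SGLang instance exposes):
```python
from openai import OpenAI  # as imported above

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder local endpoint
simple_chat(client, "moonshotai/Kimi-K2-Instruct")
```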
> [!NOTE]
> The recommended temperature for Kimi-K2-Instruct is `temperature = 0.6`.
> If no special instructions are required, the system prompt above is a good default.
---
### Tool Calling
Kimi-K2-Instruct has strong tool-calling capabilities.
To enable them, pass the list of available tools in each request; the model will then autonomously decide when and how to invoke them.
The following example demonstrates calling a weather tool end-to-end:
```python
import json

from openai import OpenAI

# Your tool implementation
def get_weather(city: str) -> dict:
    return {"weather": "Sunny"}

# Tool schema definition
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Retrieve current weather information. Call this when the user asks about the weather.",
        "parameters": {
            "type": "object",
            "required": ["city"],
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city"
                }
            }
        }
    }
}]

# Map tool names to their implementations
tool_map = {
    "get_weather": get_weather
}

def tool_call_with_client(client: OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
    ]
    finish_reason = None
    while finish_reason is None or finish_reason == "tool_calls":
        completion = client.chat.completions.create(
            model=model_name,
            messages=messages,
            temperature=0.6,
            tools=tools,  # tool list defined above
            tool_choice="auto"
        )
        choice = completion.choices[0]
        finish_reason = choice.finish_reason
        if finish_reason == "tool_calls":
            messages.append(choice.message)
            for tool_call in choice.message.tool_calls:
                tool_call_name = tool_call.function.name
                tool_call_arguments = json.loads(tool_call.function.arguments)
                tool_function = tool_map[tool_call_name]
                tool_result = tool_function(**tool_call_arguments)
                print("tool_result:", tool_result)
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "name": tool_call_name,
                    "content": json.dumps(tool_result)
                })
    print("-" * 100)
    print(choice.message.content)
```
The `tool_call_with_client` function implements the pipeline from user query to tool execution.
This pipeline requires the inference engine to support Kimi-K2’s native tool-parsing logic.
For streaming output and manual tool-parsing, see the [Tool Calling Guide](docs/tool_call_guidance.md).
---
## 6. License
Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
---
## 7. Third Party Notices
See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md)
---
## 8. Contact Us
If you have any questions, please reach out at [[email protected]](mailto:[email protected]).
|
shikderazriel6453/blockassist-bc-burrowing_thorny_gibbon_1757603318 | shikderazriel6453 | 2025-09-11T15:08:48Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "burrowing thorny gibbon", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:08:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodriquezb087/blockassist-bc-dormant_pensive_cat_1757603318 | rodriquezb087 | 2025-09-11T15:08:46Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "burrowing thorny gibbon", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:08:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tagirarega/blockassist-bc-tricky_aquatic_piranha_1757603292 | tagirarega | 2025-09-11T15:08:21Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "graceful hulking lemur", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:08:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful hulking lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oxleybranan/blockassist-bc-amphibious_tricky_platypus_1757603259 | oxleybranan | 2025-09-11T15:07:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious tricky platypus", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yesniorka/blockassist-bc-stocky_large_dove_1757603261 | yesniorka | 2025-09-11T15:07:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious tricky platypus", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:07:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist | kavpro | 2025-09-11T15:07:48Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall lively caribou", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T17:53:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cactus-S/blockassist | cactus-S | 2025-09-11T15:07:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive arctic panther", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T07:49:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive arctic panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
radlab/semantic-euro-bert-encoder-v1 | radlab | 2025-09-11T15:07:14Z | 20 | 1 | sentence-transformers | ["sentence-transformers", "safetensors", "eurobert", "- embeddings", "plwordnet", "semantic-relations", "semantic-search", "sentence-similarity", "custom_code", "pl", "en", "de", "base_model:EuroBERT/EuroBERT-610m", "base_model:finetune:EuroBERT/EuroBERT-610m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-08-26T23:36:02Z |
---
license: apache-2.0
language:
- pl
- en
- de
base_model:
- EuroBERT/EuroBERT-610m
tags:
- sentence-transformers
- '- embeddings'
- plwordnet
- semantic-relations
- semantic-search
pipeline_tag: sentence-similarity
---
# PLWordNet Semantic Embedder (bi-encoder)
A Polish semantic embedder trained on pairs constructed from plWordNet (Słowosieć) semantic relations and external descriptions of meanings.
Every relation between lexical units and synsets is transformed into training/evaluation examples.
The dataset mixes meanings’ usage signals: emotions, definitions, and external descriptions (Wikipedia, sentence-split).
The embedder mimics semantic relations: it pulls together embeddings that are linked by “positive” relations
(e.g., synonymy, hypernymy/hyponymy as defined in the dataset) and pushes apart embeddings linked by “negative”
relations (e.g., antonymy or mutually exclusive relations). Source code and training scripts:
- GitHub: [https://github.com/radlab-dev-group/radlab-plwordnet](https://github.com/radlab-dev-group/radlab-plwordnet)
## Model summary
- **Architecture**: bi-encoder built with `sentence-transformers` (transformer encoder + pooling).
- **Use cases**: semantic similarity and semantic search for Polish words, senses, definitions, and sentences.
- **Objective**: CosineSimilarityLoss on positive/negative pairs.
- **Behavior**: preserves the topology of semantic relations derived from plWordNet.
## Training data
Constructed from plWordNet relations between lexical units and synsets; each relation yields example pairs.
Augmented with:
- definitions,
- usage examples (including emotion annotations where available),
- external descriptions from Wikipedia (split into sentences).
Positive pairs correspond to relations expected to increase similarity;
negative pairs correspond to relations expected to decrease similarity.
Additional hard/soft negatives may include unrelated meanings.
## Training details
- **Trainer**: `SentenceTransformerTrainer`
- **Loss**: `CosineSimilarityLoss`
- **Evaluator**: `EmbeddingSimilarityEvaluator` (cosine)
- Typical **hyperparameters**:
- epochs: 5
- per-device batch size: 10 (gradient accumulation: 4)
- learning rate: 5e-6 (AdamW fused)
- weight decay: 0.01
- warmup: 20k steps
- fp16: true
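A minimal sketch of that setup with the sentence-transformers v3 training API (the toy pairs and output directory are placeholders; the real training data comes from the plWordNet-derived pairs described above):
``` python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("EuroBERT/EuroBERT-610m", trust_remote_code=True)

# Toy pairs: scores near 1.0 pull embeddings together, near 0.0 push them apart.
train_dataset = Dataset.from_dict({
    "sentence1": ["zamek", "zamek"],
    "sentence2": ["pałac", "wiadro"],
    "score": [1.0, 0.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="plwordnet-embedder",
    num_train_epochs=5,
    per_device_train_batch_size=10,
    gradient_accumulation_steps=4,
    learning_rate=5e-6,
    weight_decay=0.01,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.CosineSimilarityLoss(model),
)
trainer.train()
```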
## Evaluation
- **Task**: semantic similarity on dev/test splits built from the relation-derived pairs.
- **Metric**: cosine-based correlation (Spearman/Pearson) where applicable, or discrimination between positive vs. negative pairs.



## How to use
Sentence-Transformers:
``` python
# Python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("radlab/semantic-euro-bert-encoder-v1", trust_remote_code=True)
texts = ["zamek", "drzwi", "wiadro", "horyzont", "ocean"]
emb = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)
scores = util.cos_sim(emb, emb)
print(scores) # higher = more semantically similar
```
Transformers (feature extraction):
``` python
# Python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

name = "radlab/semantic-euro-bert-encoder-v1"
tok = AutoTokenizer.from_pretrained(name)
mdl = AutoModel.from_pretrained(name, trust_remote_code=True)

texts = ["student", "żak"]
tokens = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = mdl(**tokens)
    emb = out.last_hidden_state.mean(dim=1)  # mean pooling over tokens
emb = F.normalize(emb, p=2, dim=1)
sim = emb @ emb.T
print(sim)
```
|
sekirr/blockassist-bc-masked_tenacious_whale_1757603174 | sekirr | 2025-09-11T15:06:55Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:06:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ilqarkazijdmzad/blockassist-bc-giant_arctic_swan_1757603195 | ilqarkazijdmzad | 2025-09-11T15:06:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "giant arctic swan", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:06:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant arctic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oyshimimi50/blockassist-bc-alert_colorful_pigeon_1757603190 | oyshimimi50 | 2025-09-11T15:06:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert colorful pigeon", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:06:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert colorful pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arabellamorris/blockassist-bc-tricky_sneaky_locust_1757603086 | arabellamorris | 2025-09-11T15:05:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tricky sneaky locust", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:05:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iyaadshikder1546/blockassist-bc-pensive_agile_bee_1757603124 | iyaadshikder1546 | 2025-09-11T15:05:34Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pensive agile bee", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:05:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive agile bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harmonyblevinsm0/blockassist-bc-silent_miniature_monkey_1757602975 | harmonyblevinsm0 | 2025-09-11T15:04:29Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silent miniature monkey", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:03:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent miniature monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raskbxicnusray/blockassist-bc-stealthy_lithe_wildebeest_1757603023 | raskbxicnusray | 2025-09-11T15:03:51Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy lithe wildebeest", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:03:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy lithe wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_123_1757596071 | rbelanec | 2025-09-11T15:03:25Z | 0 | 0 | peft | ["peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us"] | null | 2025-09-11T13:12:56Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_123_1757596071
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_123_1757596071
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9521
- Num Input Tokens Seen: 6929680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1268 | 1.0 | 3848 | 0.2820 | 346872 |
| 0.3132 | 2.0 | 7696 | 0.2417 | 693752 |
| 0.2179 | 3.0 | 11544 | 0.2405 | 1040128 |
| 0.2649 | 4.0 | 15392 | 0.2411 | 1386696 |
| 0.2187 | 5.0 | 19240 | 0.2434 | 1733072 |
| 0.1872 | 6.0 | 23088 | 0.2394 | 2079640 |
| 0.2849 | 7.0 | 26936 | 0.2419 | 2425920 |
| 0.1858 | 8.0 | 30784 | 0.2366 | 2772144 |
| 0.2726 | 9.0 | 34632 | 0.2393 | 3118472 |
| 0.2241 | 10.0 | 38480 | 0.2438 | 3465288 |
| 0.2284 | 11.0 | 42328 | 0.2862 | 3811696 |
| 0.0849 | 12.0 | 46176 | 0.2743 | 4158168 |
| 0.1104 | 13.0 | 50024 | 0.3264 | 4504416 |
| 0.1854 | 14.0 | 53872 | 0.3800 | 4850888 |
| 0.1511 | 15.0 | 57720 | 0.4422 | 5197456 |
| 0.0483 | 16.0 | 61568 | 0.5154 | 5543848 |
| 0.1082 | 17.0 | 65416 | 0.6811 | 5890320 |
| 0.2789 | 18.0 | 69264 | 0.7981 | 6237200 |
| 0.3151 | 19.0 | 73112 | 0.9202 | 6583408 |
| 0.0006 | 20.0 | 76960 | 0.9521 | 6929680 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757602826 | cwayneconnor | 2025-09-11T15:02:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ganswiltzblack/blockassist-bc-nocturnal_humming_badger_1757602959 | ganswiltzblack | 2025-09-11T15:02:47Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nocturnal humming badger", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:02:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal humming badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taniyatoha637/blockassist-bc-eager_flapping_anaconda_1757602954 | taniyatoha637 | 2025-09-11T15:02:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "eager flapping anaconda", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:02:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- eager flapping anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1757602887 | omerbkts | 2025-09-11T15:02:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:01:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yesniorka/blockassist-bc-stocky_large_dove_1757602929
|
yesniorka
| 2025-09-11T15:02:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stocky large dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:02:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stocky large dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seams01/blockassist
|
seams01
| 2025-09-11T15:02:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T07:28:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_42_1757596047
|
rbelanec
| 2025-09-11T15:01:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:08:17Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_42_1757596047
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_42_1757596047
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2412
- Num Input Tokens Seen: 6927000
## Model description
More information needed
## Intended uses & limitations
More information needed
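Since the card leaves intended usage unspecified, here is a minimal sketch (assuming the prefix-tuning adapter is published at this repo id) of loading it on top of the base model with PEFT:
```python
# Minimal, hedged sketch: load the prefix-tuning adapter onto the Llama-3 base model
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "rbelanec/train_cola_42_1757596047")
```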
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2546 | 1.0 | 3848 | 0.2480 | 346040 |
| 0.1205 | 2.0 | 7696 | 0.2484 | 692368 |
| 0.2615 | 3.0 | 11544 | 0.2438 | 1039080 |
| 0.2572 | 4.0 | 15392 | 0.2436 | 1385192 |
| 0.2552 | 5.0 | 19240 | 0.2432 | 1731824 |
| 0.3358 | 6.0 | 23088 | 0.2496 | 2078408 |
| 0.2235 | 7.0 | 26936 | 0.2438 | 2424592 |
| 0.2903 | 8.0 | 30784 | 0.2476 | 2770768 |
| 0.2715 | 9.0 | 34632 | 0.2459 | 3117120 |
| 0.2141 | 10.0 | 38480 | 0.2748 | 3463336 |
| 0.2359 | 11.0 | 42328 | 0.2426 | 3809536 |
| 0.316 | 12.0 | 46176 | 0.2439 | 4155688 |
| 0.3199 | 13.0 | 50024 | 0.2455 | 4502336 |
| 0.2547 | 14.0 | 53872 | 0.2459 | 4848864 |
| 0.2146 | 15.0 | 57720 | 0.2422 | 5194640 |
| 0.3529 | 16.0 | 61568 | 0.2419 | 5541160 |
| 0.2237 | 17.0 | 65416 | 0.2437 | 5887864 |
| 0.3058 | 18.0 | 69264 | 0.2429 | 6234216 |
| 0.2963 | 19.0 | 73112 | 0.2419 | 6580528 |
| 0.3099 | 20.0 | 76960 | 0.2412 | 6927000 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Miracle-man/blockassist
|
Miracle-man
| 2025-09-11T15:01:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing lithe koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:52:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing lithe koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schnecklothheath/blockassist-bc-soaring_leaping_snake_1757602864
|
schnecklothheath
| 2025-09-11T15:01:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring leaping snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring leaping snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amannammaka/blockassist-bc-feathered_meek_kangaroo_1757602835
|
amannammaka
| 2025-09-11T15:00:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered meek kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:00:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered meek kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BnSa3d/nutriieee
|
BnSa3d
| 2025-09-11T15:00:22Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T14:40:07Z |
---
license: apache-2.0
---
|
milfordprudence/blockassist-bc-aquatic_reclusive_cassowary_1757602806
|
milfordprudence
| 2025-09-11T15:00:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering hairy woodpecker",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:00:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering hairy woodpecker
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goshujaieja/blockassist-bc-untamed_armored_ram_1757602778
|
goshujaieja
| 2025-09-11T14:59:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed armored ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:59:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed armored ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fuckSelf/GPT-SoVITS-Russian
|
fuckSelf
| 2025-09-11T14:59:49Z | 0 | 0 | null |
[
"GPT-SoVITS",
"Russian",
"text-to-speech",
"ru",
"zh",
"base_model:lj1995/GPT-SoVITS",
"base_model:finetune:lj1995/GPT-SoVITS",
"license:mit",
"region:us"
] |
text-to-speech
| 2025-09-11T13:46:11Z |
---
license: mit
language:
- ru
- zh
base_model:
- lj1995/GPT-SoVITS
pipeline_tag: text-to-speech
tags:
- GPT-SoVITS
- Russian
---
# Russian-Chinese model trained from a patched GPT-SoVITS base
The model was built by patching the base models per the [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS/wiki/%E8%AE%AD%E7%BB%83%E6%96%B0%E8%AF%AD%E8%A8%80(how-to-train-the-models-with-other-languages)) guide and then training on 288 hours of Russian and 10 hours of Chinese data.
The GPT model was trained for 50 epochs and the SoVITS model for 12. After a quick round of manual listening, the best combination (for Russian->Russian, Chinese->Chinese, and Chinese->Russian) is gpt-20epoch with sovits-10epoch (Russian->Chinese is broken: it only produces silent audio).
## File descriptions
s2Gv2ProPlus-rus.pth: patched from GPT_SoVITS/pretrained_models/v2Pro/s2Gv2ProPlus.pth; place it under GPT_SoVITS/pretrained_models/v2Pro/
s1v3-ru.ckpt: patched from GPT_SoVITS/pretrained_models/s1v3.ckpt; place it under GPT_SoVITS/pretrained_models/
(Remember to globally replace every place in the code that uses the two original base models with the two new ones.)
ru-base_e10_s123570.pth: place under GPT_weights_v2ProPlus
ru-base-e20.ckpt: place under SoVITS_weights_v2ProPlus
Patch script:
微操脚本:
```
import torch

## Migrate the GPT model's embedding layer and insert the new tokens
gpt_dict = torch.load(r"GPT_SoVITS/pretrained_models/s1v3.ckpt", map_location=torch.device('cpu'))
gpt_shape = gpt_dict["weight"]["model.ar_text_embedding.word_embeddings.weight"].shape
print(gpt_shape)
# embedding_dim = 512 (512 columns)
first_part = gpt_dict["weight"]["model.ar_text_embedding.word_embeddings.weight"][:, :]
new_weight = torch.cat([first_part, torch.randn(21, 512)], dim=0)  # len(rus_symbols) = 21, so append 21 new embedding rows
gpt_dict["weight"]["model.ar_text_embedding.word_embeddings.weight"] = new_weight
gpt_dict["config"]["model"]["phoneme_vocab_size"] = 753  # originally 732, plus the 21 new symbols
torch.save(gpt_dict, r"GPT_SoVITS/pretrained_models/s1v3-ru.ckpt")  # save the new model

## Do the same for the SoVITS model's text embedding
sovits_dict = torch.load(r"GPT_SoVITS/pretrained_models/v2Pro/s2Gv2ProPlus.pth", map_location=torch.device('cpu'))  # load the SoVITS model
sovits_shape = sovits_dict["weight"]["enc_p.text_embedding.weight"].shape
print(sovits_shape)
first_part = sovits_dict["weight"]["enc_p.text_embedding.weight"][:, :]
new_weight = torch.cat([first_part, torch.randn(21, 192)], dim=0)  # embedding_dim = 192 here; again append 21 new rows
sovits_dict["weight"]["enc_p.text_embedding.weight"] = new_weight
torch.save(sovits_dict, r"GPT_SoVITS/pretrained_models/v2Pro/s2Gv2ProPlus-rus.pth")  # save the new model
print("success!")
```
## Usage
1. Provide your own text front-end code:
```
# -*- coding: utf-8 -*-
# Russian g2p (grapheme-to-phoneme) implementation
import epitran

epi = epitran.Epitran('rus-Cyrl', ligatures=True, tones=True)

def g2p(text):
    phones = epi.xsampa_list(text, normpunc=True)
    phones = ["R" + post_replace_rus(i) for i in phones if i != "?"]
    return phones

# Replace assorted odd symbols with plain ASCII ones, so nothing breaks when the output is parsed back from file
def post_replace_rus(rus):
    rep_map = {
        ":": ",",
        ";": ",",
        ",": ",",
        "。": ".",
        "!": "!",
        "\n": ".",
        "·": ",",
        "、": ",",
        "...": "…",
        "?": "",
    }
    if rus in rep_map.keys():
        rus = rep_map[rus]
    return rus.upper()

# Clean and deduplicate the Russian phoneme symbols
def process_russian_symbols(file_path):
    import pandas as pd
    # Load the tsv file (tab-separated)
    df = pd.read_csv(file_path, sep='\t')
    # Read the 'sentence' column
    sentences = df['sentence'].tolist()
    # Run g2p (defined above; returns a phoneme list) on every sentence
    phonemes = []
    for sentence in sentences:
        phoneme_list = g2p(sentence)
        # if "RS\\:'" in phoneme_list:
        #     print(sentence)
        phonemes.extend(phoneme_list)
    # Deduplicate and sort
    unique_sorted_phonemes = sorted(list(set(phonemes)))
    return unique_sorted_phonemes
```
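A quick sanity check of the front end (illustrative; the exact symbols depend on epitran's output for your install):
```
print(g2p("Привет, мир"))  # prints the R-prefixed X-SAMPA phoneme list for the input text
```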
Russian phoneme symbols:
```
rus_symbols={
'RA',
'RB',
'RD',
'RE',
'RF',
'RG',
'RI',
'RJ',
'RK',
'RL',
'RM',
'RN',
'RO',
'RP',
'RR',
'RS',
'RT',
'RU',
'RV',
'RX',
'RZ'
}
```
2. In GPT_SoVITS/configs/s1longer-v2.yaml, set phoneme_vocab_size = 753.
If you need other checkpoint models, code, or training details, contact me by email: [email protected]
# [MIT License](https://opensource.org/licenses/MIT)
|
eilandlovetta/blockassist-bc-lumbering_feline_tiger_1757602773
|
eilandlovetta
| 2025-09-11T14:59:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering feline tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:59:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering feline tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gouki510/llama3-8b-base-correct-career
|
gouki510
| 2025-09-11T14:59:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B",
"base_model:finetune:unsloth/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:48:41Z |
---
base_model: unsloth/Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rbelanec/train_cola_789_1757596122
|
rbelanec
| 2025-09-11T14:59:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:05:49Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_cola_789_1757596122
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_789_1757596122
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4594
- Num Input Tokens Seen: 3663512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1236 | 0.5 | 962 | 0.2745 | 182656 |
| 0.2375 | 1.0 | 1924 | 0.1683 | 365728 |
| 0.3172 | 1.5 | 2886 | 0.2014 | 548992 |
| 0.2088 | 2.0 | 3848 | 0.1443 | 731984 |
| 0.0806 | 2.5 | 4810 | 0.1764 | 915792 |
| 0.3512 | 3.0 | 5772 | 0.1655 | 1098920 |
| 0.0369 | 3.5 | 6734 | 0.1680 | 1281640 |
| 0.0703 | 4.0 | 7696 | 0.1568 | 1465464 |
| 0.0718 | 4.5 | 8658 | 0.1608 | 1649720 |
| 0.1062 | 5.0 | 9620 | 0.1466 | 1831920 |
| 0.2303 | 5.5 | 10582 | 0.1536 | 2014928 |
| 0.2191 | 6.0 | 11544 | 0.1693 | 2198176 |
| 0.1416 | 6.5 | 12506 | 0.1756 | 2381440 |
| 0.1436 | 7.0 | 13468 | 0.1585 | 2564952 |
| 0.0112 | 7.5 | 14430 | 0.1843 | 2748568 |
| 0.15 | 8.0 | 15392 | 0.1909 | 2931096 |
| 0.0999 | 8.5 | 16354 | 0.1853 | 3113624 |
| 0.0045 | 9.0 | 17316 | 0.2035 | 3296808 |
| 0.0655 | 9.5 | 18278 | 0.2026 | 3480168 |
| 0.0811 | 10.0 | 19240 | 0.2036 | 3663512 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
allfordedgar26/blockassist-bc-omnivorous_sprightly_aardvark_1757602731
|
allfordedgar26
| 2025-09-11T14:58:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous sprightly aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:58:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous sprightly aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pabeypaul/blockassist-bc-sizable_knobby_salamander_1757602730
|
pabeypaul
| 2025-09-11T14:58:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous sprightly aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:58:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous sprightly aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KamilMpakiet/agatadwa
|
KamilMpakiet
| 2025-09-11T14:58:22Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-11T14:11:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
brisondey/blockassist-bc-insectivorous_energetic_koala_1757602671
|
brisondey
| 2025-09-11T14:58:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous energetic koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:58:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous energetic koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jobs-git/Wan2.2-I2V-A14B-Diffusers
|
jobs-git
| 2025-09-11T14:58:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-video",
"en",
"zh",
"arxiv:2503.20314",
"license:apache-2.0",
"diffusers:WanImageToVideoPipeline",
"region:us"
] |
image-to-video
| 2025-09-11T14:58:03Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-video
---
# Wan2.2
<p align="center">
<img src="assets/logo.png" width="400"/>
<p>
<p align="center">
💜 <a href="https://wan.video"><b>Wan</b></a>    |    🖥️ <a href="https://github.com/Wan-Video/Wan2.2">GitHub</a>    |   🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2503.20314">Technical Report</a>    |    📑 <a href="https://wan.video/welcome?spm=a2ty_o02.30011076.0.0.6c9ee41eCcluqg">Blog</a>    |   💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>   |    📖 <a href="https://discord.gg/AKNgpMK4Yj">Discord</a>  
<br>
-----
[**Wan: Open and Advanced Large-Scale Video Generative Models**](https://arxiv.org/abs/2503.20314) <br>
We are excited to introduce **Wan2.2**, a major upgrade to our foundational video models. With **Wan2.2**, we have focused on incorporating the following innovations:
- 👍 **Effective MoE Architecture**: Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By separating the denoising process across timesteps with specialized, powerful expert models, it enlarges the overall model capacity while maintaining the same computational cost.
- 👍 **Cinematic-level Aesthetics**: Wan2.2 incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences.
- 👍 **Complex Motion Generation**: Compared to Wan2.1, Wan2.2 is trained on significantly more data, with +65.6% more images and +83.2% more videos. This expansion notably enhances the model's generalization across multiple dimensions such as motion, semantics, and aesthetics, achieving top performance among all open-source and closed-source models.
- 👍 **Efficient High-Definition Hybrid TI2V**: Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE that achieves a compression ratio of **16×16×4**. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can also run on consumer-grade graphics cards like 4090. It is one of the fastest **720P@24fps** models currently available, capable of serving both the industrial and academic sectors simultaneously.
This repository also includes our I2V-A14B model, designed for image-to-video generation, supporting both 480P and 720P resolutions. Built with a Mixture-of-Experts (MoE) architecture, it achieves more stable video synthesis with reduced unrealistic camera movements and offers enhanced support for diverse stylized scenes.
## Video Demos
<div align="center">
<video width="80%" controls>
<source src="https://cloud.video.taobao.com/vod/NnCd0fC-1eckDUuVBMz43oD_U6mTsPpBwga3wdnAkXA.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
## 🔥 Latest News!!
* Jul 28, 2025: 👋 Wan2.2 has been integrated into ComfyUI ([CN](https://docs.comfy.org/zh-CN/tutorials/video/wan/wan2_2) | [EN](https://docs.comfy.org/tutorials/video/wan/wan2_2)). Enjoy!
* Jul 28, 2025: 👋 Wan2.2's T2V, I2V and TI2V have been integrated into Diffusers ([T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers) | [I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers) | [TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)). Feel free to give it a try!
* Jul 28, 2025: 👋 We've released the inference code and model weights of **Wan2.2**.
## Community Works
If your research or project builds upon [**Wan2.1**](https://github.com/Wan-Video/Wan2.1) or Wan2.2, we welcome you to share it with us so we can highlight it for the broader community.
## 📑 Todo List
- Wan2.2 Text-to-Video
- [x] Multi-GPU Inference code of the A14B and 14B models
- [x] Checkpoints of the A14B and 14B models
- [x] ComfyUI integration
- [x] Diffusers integration
- Wan2.2 Image-to-Video
- [x] Multi-GPU Inference code of the A14B model
- [x] Checkpoints of the A14B model
- [x] ComfyUI integration
- [x] Diffusers integration
- Wan2.2 Text-Image-to-Video
- [x] Multi-GPU Inference code of the 5B model
- [x] Checkpoints of the 5B model
- [x] ComfyUI integration
- [x] Diffusers integration
## Run Wan2.2
#### Installation
Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
```
Install dependencies:
```sh
# Ensure torch >= 2.4.0
# If the installation of `flash_attn` fails, try installing the other packages first and install `flash_attn` last
pip install -r requirements.txt
```
#### Model Download
| Models | Download Links | Description |
|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| T2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-T2V-A14B) | Text-to-Video MoE model, supports 480P & 720P |
| I2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-I2V-A14B) | Image-to-Video MoE model, supports 480P & 720P |
| TI2V-5B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-TI2V-5B) | High-compression VAE, T2V+I2V, supports 720P |
> 💡Note:
> The TI2V-5B model supports 720P video generation at **24 FPS**.
Download models using huggingface-cli:
``` sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-I2V-A14B --local-dir ./Wan2.2-I2V-A14B
```
Download models using modelscope-cli:
``` sh
pip install modelscope
modelscope download Wan-AI/Wan2.2-I2V-A14B --local_dir ./Wan2.2-I2V-A14B
```
#### Run Image-to-Video Generation
This repository supports the `Wan2.2-I2V-A14B` Image-to-Video model and can generate video at both 480P and 720P resolutions.
- Single-GPU inference
```sh
python generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --offload_model True --convert_model_dtype --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
> This command can run on a GPU with at least 80GB VRAM.
> 💡For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.
- Multi-GPU inference using FSDP + DeepSpeed Ulysses
```sh
torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
- Image-to-Video Generation without prompt
```sh
DASH_API_KEY=your_key torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --prompt '' --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --use_prompt_extend --prompt_extend_method 'dashscope'
```
> 💡The model can generate videos solely from the input image. You can use prompt extension to generate a prompt from the image.
> The prompt-extension process is described [here](#2-using-prompt-extention).
- Running with Diffusers
```py
import torch
import numpy as np
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
model_id = "Wan-AI/Wan2.2-I2V-A14B-Diffusers"
dtype = torch.bfloat16
device = "cuda"
pipe = WanImageToVideoPipeline.from_pretrained(model_id, torch_dtype=dtype)
pipe.to(device)
image = load_image(
"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/wan_i2v_input.JPG"
)
max_area = 480 * 832
aspect_ratio = image.height / image.width
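# Snap the target size to multiples of the VAE spatial scale factor times the transformer patch size so the latent grid tiles cleanly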
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))
prompt = "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"
generator = torch.Generator(device=device).manual_seed(0)
output = pipe(
image=image,
prompt=prompt,
negative_prompt=negative_prompt,
height=height,
width=width,
num_frames=81,
guidance_scale=3.5,
num_inference_steps=40,
generator=generator,
).frames[0]
export_to_video(output, "i2v_output.mp4", fps=16)
```
> 💡**Note**: This model requires features that are currently available only in the main branch of diffusers. The latest stable release on PyPI does not yet include these updates.
> To use this model, please install the library from source:
> ```
> pip install git+https://github.com/huggingface/diffusers
> ```
## Computational Efficiency on Different GPUs
We test the computational efficiency of different **Wan2.2** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.
<div align="center">
<img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>
> The parameter settings for the tests presented in this table are as follows:
> (1) Multi-GPU: 14B: `--ulysses_size 4/8 --dit_fsdp --t5_fsdp`, 5B: `--ulysses_size 4/8 --offload_model True --convert_model_dtype --t5_cpu`; Single-GPU: 14B: `--offload_model True --convert_model_dtype`, 5B: `--offload_model True --convert_model_dtype --t5_cpu`
> (--convert_model_dtype converts model parameter types to config.param_dtype);
> (2) The distributed testing utilizes the built-in FSDP and Ulysses implementations, with FlashAttention3 deployed on Hopper architecture GPUs;
> (3) Tests were run without the `--use_prompt_extend` flag;
> (4) Reported results are the average of multiple samples taken after the warm-up phase.
-------
## Introduction of Wan2.2
**Wan2.2** builds on the foundation of Wan2.1 with notable improvements in generation quality and model capability. This upgrade is driven by a series of key technical innovations, mainly including the Mixture-of-Experts (MoE) architecture, upgraded training data, and high-compression video generation.
##### (1) Mixture-of-Experts (MoE) Architecture
Wan2.2 introduces Mixture-of-Experts (MoE) architecture into the video generation diffusion model. MoE has been widely validated in large language models as an efficient approach to increase total model parameters while keeping inference cost nearly unchanged. In Wan2.2, the A14B model series adopts a two-expert design tailored to the denoising process of diffusion models: a high-noise expert for the early stages, focusing on overall layout; and a low-noise expert for the later stages, refining video details. Each expert model has about 14B parameters, resulting in a total of 27B parameters but only 14B active parameters per step, keeping inference computation and GPU memory nearly unchanged.
<div align="center">
<img src="assets/moe_arch.png" alt="" style="width: 90%;" />
</div>
The transition point between the two experts is determined by the signal-to-noise ratio (SNR), a metric that decreases monotonically as the denoising step $t$ increases. At the beginning of the denoising process, $t$ is large and the noise level is high, so the SNR is at its minimum, denoted as ${SNR}_{min}$. In this stage, the high-noise expert is activated. We define a threshold step ${t}_{moe}$ corresponding to half of the ${SNR}_{min}$, and switch to the low-noise expert when $t<{t}_{moe}$.
<div align="center">
<img src="assets/moe_2.png" alt="" style="width: 90%;" />
</div>
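As a minimal illustrative sketch (hypothetical names and pseudocode, not the repository's actual API), the routing rule can be written as:
```py
# Two-expert MoE routing by denoising step t, as described above (illustrative)
def select_expert(t, t_moe, high_noise_expert, low_noise_expert):
    # Early steps: t >= t_moe, high noise / low SNR -> high-noise expert (overall layout)
    # Late steps:  t <  t_moe, low noise -> low-noise expert (detail refinement)
    return high_noise_expert if t >= t_moe else low_noise_expert
```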
To validate the effectiveness of the MoE architecture, four settings are compared based on their validation loss curves. The baseline **Wan2.1** model does not employ the MoE architecture. Among the MoE-based variants, **Wan2.1 & High-Noise Expert** reuses the Wan2.1 model as the low-noise expert while using Wan2.2's high-noise expert, whereas **Wan2.1 & Low-Noise Expert** uses Wan2.1 as the high-noise expert and Wan2.2's low-noise expert. **Wan2.2 (MoE)** (our final version) achieves the lowest validation loss, indicating that its generated video distribution is closest to the ground truth and exhibits superior convergence.
##### (2) Efficient High-Definition Hybrid TI2V
To enable more efficient deployment, Wan2.2 also explores a high-compression design. In addition to the 27B MoE models, a 5B dense model, i.e., TI2V-5B, is released. It is supported by a high-compression Wan2.2-VAE, which achieves a $T\times H\times W$ compression ratio of $4\times16\times16$, increasing the overall compression rate to 64 while maintaining high-quality video reconstruction. With an additional patchification layer, the total compression ratio of TI2V-5B reaches $4\times32\times32$. Without specific optimization, TI2V-5B can generate a 5-second 720P video in under 9 minutes on a single consumer-grade GPU, ranking among the fastest 720P@24fps video generation models. This model also natively supports both text-to-video and image-to-video tasks within a single unified framework, covering both academic research and practical applications.
<div align="center">
<img src="assets/vae.png" alt="" style="width: 80%;" />
</div>
##### Comparisons to SOTAs
We compared Wan2.2 with leading closed-source commercial models on our new Wan-Bench 2.0, evaluating performance across multiple crucial dimensions. The results demonstrate that Wan2.2 achieves superior performance compared to these leading models.
<div align="center">
<img src="assets/performance.png" alt="" style="width: 90%;" />
</div>
## Citation
If you find our work helpful, please cite us.
```
@article{wan2025,
title={Wan: Open and Advanced Large-Scale Video Generative Models},
author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
journal = {arXiv preprint arXiv:2503.20314},
year={2025}
}
```
## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over the content you generate, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
## Acknowledgements
We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.
## Contact Us
If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/AKNgpMK4Yj) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
|
misaeluoyz/blockassist-bc-bipedal_soaring_porcupine_1757602642
|
misaeluoyz
| 2025-09-11T14:57:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:57:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luiskodraje/blockassist-bc-climbing_quick_reindeer_1757602593
|
luiskodraje
| 2025-09-11T14:57:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric iridescent puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:56:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric iridescent puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neylanduoh/blockassist-bc-prehistoric_iridescent_puffin_1757602614
|
neylanduoh
| 2025-09-11T14:57:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric iridescent puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:56:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric iridescent puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jobs-git/Wan2.2-I2V-A14B
|
jobs-git
| 2025-09-11T14:56:28Z | 0 | 0 |
wan2.2
|
[
"wan2.2",
"diffusers",
"safetensors",
"image-to-video",
"en",
"zh",
"arxiv:2503.20314",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-09-11T14:56:27Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-video
library_name: wan2.2
---
# Wan2.2
<p align="center">
<img src="assets/logo.png" width="400"/>
<p>
<p align="center">
💜 <a href="https://wan.video"><b>Wan</b></a>    |    🖥️ <a href="https://github.com/Wan-Video/Wan2.2">GitHub</a>    |   🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2503.20314">Technical Report</a>    |    📑 <a href="https://wan.video/welcome?spm=a2ty_o02.30011076.0.0.6c9ee41eCcluqg">Blog</a>    |   💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>   |    📖 <a href="https://discord.gg/AKNgpMK4Yj">Discord</a>  
<br>
-----
[**Wan: Open and Advanced Large-Scale Video Generative Models**](https://arxiv.org/abs/2503.20314) <br>
We are excited to introduce **Wan2.2**, a major upgrade to our foundational video models. With **Wan2.2**, we have focused on incorporating the following innovations:
- 👍 **Effective MoE Architecture**: Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By separating the denoising process across timesteps with specialized, powerful expert models, it enlarges the overall model capacity while maintaining the same computational cost.
- 👍 **Cinematic-level Aesthetics**: Wan2.2 incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences.
- 👍 **Complex Motion Generation**: Compared to Wan2.1, Wan2.2 is trained on significantly more data, with +65.6% more images and +83.2% more videos. This expansion notably enhances the model's generalization across multiple dimensions such as motion, semantics, and aesthetics, achieving top performance among all open-source and closed-source models.
- 👍 **Efficient High-Definition Hybrid TI2V**: Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE that achieves a compression ratio of **16×16×4**. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can also run on consumer-grade graphics cards like 4090. It is one of the fastest **720P@24fps** models currently available, capable of serving both the industrial and academic sectors simultaneously.
This repository also includes our I2V-A14B model, designed for image-to-video generation, supporting both 480P and 720P resolutions. Built with a Mixture-of-Experts (MoE) architecture, it achieves more stable video synthesis with reduced unrealistic camera movements and offers enhanced support for diverse stylized scenes.
## Video Demos
<div align="center">
<video width="80%" controls>
<source src="https://cloud.video.taobao.com/vod/NnCd0fC-1eckDUuVBMz43oD_U6mTsPpBwga3wdnAkXA.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
## 🔥 Latest News!!
* Jul 28, 2025: 👋 Wan2.2 has been integrated into ComfyUI ([CN](https://docs.comfy.org/zh-CN/tutorials/video/wan/wan2_2) | [EN](https://docs.comfy.org/tutorials/video/wan/wan2_2)). Enjoy!
* Jul 28, 2025: 👋 Wan2.2's T2V, I2V and TI2V have been integrated into Diffusers ([T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers) | [I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers) | [TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)). Feel free to give it a try!
* Jul 28, 2025: 👋 We've released the inference code and model weights of **Wan2.2**.
## Community Works
If your research or project builds upon [**Wan2.1**](https://github.com/Wan-Video/Wan2.1) or Wan2.2, we welcome you to share it with us so we can highlight it for the broader community.
## 📑 Todo List
- Wan2.2 Text-to-Video
- [x] Multi-GPU Inference code of the A14B and 14B models
- [x] Checkpoints of the A14B and 14B models
- [x] ComfyUI integration
- [x] Diffusers integration
- Wan2.2 Image-to-Video
- [x] Multi-GPU Inference code of the A14B model
- [x] Checkpoints of the A14B model
- [x] ComfyUI integration
- [x] Diffusers integration
- Wan2.2 Text-Image-to-Video
- [x] Multi-GPU Inference code of the 5B model
- [x] Checkpoints of the 5B model
- [x] ComfyUI integration
- [x] Diffusers integration
## Run Wan2.2
#### Installation
Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
```
Install dependencies:
```sh
# Ensure torch >= 2.4.0
# If the installation of `flash_attn` fails, try installing the other packages first and install `flash_attn` last
pip install -r requirements.txt
```
#### Model Download
| Models | Download Links | Description |
|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| T2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-T2V-A14B) | Text-to-Video MoE model, supports 480P & 720P |
| I2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-I2V-A14B) | Image-to-Video MoE model, supports 480P & 720P |
| TI2V-5B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-TI2V-5B) | High-compression VAE, T2V+I2V, supports 720P |
> 💡Note:
> The TI2V-5B model supports 720P video generation at **24 FPS**.
Download models using huggingface-cli:
``` sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-I2V-A14B --local-dir ./Wan2.2-I2V-A14B
```
Download models using modelscope-cli:
``` sh
pip install modelscope
modelscope download Wan-AI/Wan2.2-I2V-A14B --local_dir ./Wan2.2-I2V-A14B
```
#### Run Image-to-Video Generation
This repository supports the `Wan2.2-I2V-A14B` Image-to-Video model and can generate video at both 480P and 720P resolutions.
- Single-GPU inference
```sh
python generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --offload_model True --convert_model_dtype --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
> This command can run on a GPU with at least 80GB VRAM.
> 💡For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.
- Multi-GPU inference using FSDP + DeepSpeed Ulysses
```sh
torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
- Image-to-Video Generation without prompt
```sh
DASH_API_KEY=your_key torchrun --nproc_per_node=8 generate.py --task i2v-A14B --size 1280*720 --ckpt_dir ./Wan2.2-I2V-A14B --prompt '' --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --use_prompt_extend --prompt_extend_method 'dashscope'
```
> 💡The model can generate videos solely from the input image. You can use prompt extension to generate a prompt from the image.
> The prompt-extension process is described [here](#2-using-prompt-extention).
## Computational Efficiency on Different GPUs
We test the computational efficiency of different **Wan2.2** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.
<div align="center">
<img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>
> The parameter settings for the tests presented in this table are as follows:
> (1) Multi-GPU: 14B: `--ulysses_size 4/8 --dit_fsdp --t5_fsdp`, 5B: `--ulysses_size 4/8 --offload_model True --convert_model_dtype --t5_cpu`; Single-GPU: 14B: `--offload_model True --convert_model_dtype`, 5B: `--offload_model True --convert_model_dtype --t5_cpu`
> (--convert_model_dtype converts model parameter types to config.param_dtype);
> (2) The distributed testing utilizes the built-in FSDP and Ulysses implementations, with FlashAttention3 deployed on Hopper architecture GPUs;
> (3) Tests were run without the `--use_prompt_extend` flag;
> (4) Reported results are the average of multiple samples taken after the warm-up phase.
-------
## Introduction of Wan2.2
**Wan2.2** builds on the foundation of Wan2.1 with notable improvements in generation quality and model capability. This upgrade is driven by a series of key technical innovations, mainly including the Mixture-of-Experts (MoE) architecture, upgraded training data, and high-compression video generation.
##### (1) Mixture-of-Experts (MoE) Architecture
Wan2.2 introduces Mixture-of-Experts (MoE) architecture into the video generation diffusion model. MoE has been widely validated in large language models as an efficient approach to increase total model parameters while keeping inference cost nearly unchanged. In Wan2.2, the A14B model series adopts a two-expert design tailored to the denoising process of diffusion models: a high-noise expert for the early stages, focusing on overall layout; and a low-noise expert for the later stages, refining video details. Each expert model has about 14B parameters, resulting in a total of 27B parameters but only 14B active parameters per step, keeping inference computation and GPU memory nearly unchanged.
<div align="center">
<img src="assets/moe_arch.png" alt="" style="width: 90%;" />
</div>
The transition point between the two experts is determined by the signal-to-noise ratio (SNR), a metric that decreases monotonically as the denoising step $t$ increases. At the beginning of the denoising process, $t$ is large and the noise level is high, so the SNR is at its minimum, denoted as ${SNR}_{min}$. In this stage, the high-noise expert is activated. We define a threshold step ${t}_{moe}$ corresponding to half of the ${SNR}_{min}$, and switch to the low-noise expert when $t<{t}_{moe}$.
<div align="center">
<img src="assets/moe_2.png" alt="" style="width: 90%;" />
</div>
To validate the effectiveness of the MoE architecture, four settings are compared based on their validation loss curves. The baseline **Wan2.1** model does not employ the MoE architecture. Among the MoE-based variants, **Wan2.1 & High-Noise Expert** reuses the Wan2.1 model as the low-noise expert while using Wan2.2's high-noise expert, whereas **Wan2.1 & Low-Noise Expert** uses Wan2.1 as the high-noise expert and Wan2.2's low-noise expert. **Wan2.2 (MoE)** (our final version) achieves the lowest validation loss, indicating that its generated video distribution is closest to the ground truth and exhibits superior convergence.
##### (2) Efficient High-Definition Hybrid TI2V
To enable more efficient deployment, Wan2.2 also explores a high-compression design. In addition to the 27B MoE models, a 5B dense model, i.e., TI2V-5B, is released. It is supported by a high-compression Wan2.2-VAE, which achieves a $T\times H\times W$ compression ratio of $4\times16\times16$, increasing the overall compression rate to 64 while maintaining high-quality video reconstruction. With an additional patchification layer, the total compression ratio of TI2V-5B reaches $4\times32\times32$. Without specific optimization, TI2V-5B can generate a 5-second 720P video in under 9 minutes on a single consumer-grade GPU, ranking among the fastest 720P@24fps video generation models. This model also natively supports both text-to-video and image-to-video tasks within a single unified framework, covering both academic research and practical applications.
<div align="center">
<img src="assets/vae.png" alt="" style="width: 80%;" />
</div>
##### Comparisons to SOTAs
We compared Wan2.2 with leading closed-source commercial models on our new Wan-Bench 2.0, evaluating performance across multiple crucial dimensions. The results demonstrate that Wan2.2 achieves superior performance compared to these leading models.
<div align="center">
<img src="assets/performance.png" alt="" style="width: 90%;" />
</div>
## Citation
If you find our work helpful, please cite us.
```
@article{wan2025,
title={Wan: Open and Advanced Large-Scale Video Generative Models},
author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
journal = {arXiv preprint arXiv:2503.20314},
year={2025}
}
```
## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over the content you generate, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
## Acknowledgements
We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.
## Contact Us
If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/AKNgpMK4Yj) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
|
merrithewlesley/blockassist-bc-pawing_squeaky_bison_1757602544
|
merrithewlesley
| 2025-09-11T14:56:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing squeaky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:56:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing squeaky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canadayfawuh/blockassist-bc-flapping_wise_rhino_1757602557
|
canadayfawuh
| 2025-09-11T14:56:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing squeaky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:56:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing squeaky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1757602504
|
akirafudo
| 2025-09-11T14:56:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:55:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
brauerraglmb/blockassist-bc-tough_subtle_tortoise_1757602534
|
brauerraglmb
| 2025-09-11T14:55:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tough subtle tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:55:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tough subtle tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manesgvstallmanaa/blockassist-bc-prickly_prickly_caterpillar_1757602512
|
manesgvstallmanaa
| 2025-09-11T14:55:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prickly prickly caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:55:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly prickly caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nick7623874/distilgpt2
|
nick7623874
| 2025-09-11T14:55:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:distilgpt2",
"lora",
"transformers",
"text-generation",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-11T12:54:31Z |
---
library_name: peft
license: apache-2.0
base_model: distilgpt2
tags:
- base_model:adapter:distilgpt2
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: distilgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
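Pending details from the author, a minimal hedged sketch (assuming the LoRA adapter is published at this repo id) for text generation would be:
```python
# Minimal, hedged sketch: run the LoRA adapter on top of distilgpt2
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
base = AutoModelForCausalLM.from_pretrained("distilgpt2")
model = PeftModel.from_pretrained(base, "nick7623874/distilgpt2")

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```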
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 6
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.22.0
|
lornaaveradutch/blockassist-bc-poisonous_domestic_jaguar_1757602477
|
lornaaveradutch
| 2025-09-11T14:54:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous domestic jaguar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous domestic jaguar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hartsellbrian/blockassist-bc-pawing_wiry_bee_1757602442
|
hartsellbrian
| 2025-09-11T14:54:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing wiry bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing wiry bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jrfszy/blockassist-bc-barky_wary_sandpiper_1757602425
|
jrfszy
| 2025-09-11T14:54:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky wary sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:54:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky wary sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_789_1757596124
|
rbelanec
| 2025-09-11T14:54:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:07:08Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_cola_789_1757596124
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_789_1757596124
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1651
- Num Input Tokens Seen: 3663512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.0727 | 0.5 | 962 | 0.2056 | 182656 |
| 0.28 | 1.0 | 1924 | 0.1750 | 365728 |
| 0.2319 | 1.5 | 2886 | 0.2058 | 548992 |
| 0.1678 | 2.0 | 3848 | 0.1835 | 731984 |
| 0.068 | 2.5 | 4810 | 0.2135 | 915792 |
| 0.4099 | 3.0 | 5772 | 0.1894 | 1098920 |
| 0.043 | 3.5 | 6734 | 0.1944 | 1281640 |
| 0.0726 | 4.0 | 7696 | 0.1651 | 1465464 |
| 0.1162 | 4.5 | 8658 | 0.1846 | 1649720 |
| 0.0194 | 5.0 | 9620 | 0.1789 | 1831920 |
| 0.0803 | 5.5 | 10582 | 0.1859 | 2014928 |
| 0.2613 | 6.0 | 11544 | 0.1869 | 2198176 |
| 0.1435 | 6.5 | 12506 | 0.1877 | 2381440 |
| 0.1227 | 7.0 | 13468 | 0.1890 | 2564952 |
| 0.288 | 7.5 | 14430 | 0.1912 | 2748568 |
| 0.2387 | 8.0 | 15392 | 0.1974 | 2931096 |
| 0.0504 | 8.5 | 16354 | 0.1940 | 3113624 |
| 0.0763 | 9.0 | 17316 | 0.1961 | 3296808 |
| 0.0386 | 9.5 | 18278 | 0.1963 | 3480168 |
| 0.094 | 10.0 | 19240 | 0.1969 | 3663512 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_cola_789_1757596121
|
rbelanec
| 2025-09-11T14:53:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:05:34Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_cola_789_1757596121
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_789_1757596121
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1857
- Num Input Tokens Seen: 3663512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1364 | 0.5 | 962 | 0.2231 | 182656 |
| 0.241 | 1.0 | 1924 | 0.1857 | 365728 |
| 0.2613 | 1.5 | 2886 | 0.2296 | 548992 |
| 0.3619 | 2.0 | 3848 | 0.2091 | 731984 |
| 0.0575 | 2.5 | 4810 | 0.2246 | 915792 |
| 0.5262 | 3.0 | 5772 | 0.2300 | 1098920 |
| 0.1518 | 3.5 | 6734 | 0.2180 | 1281640 |
| 0.1225 | 4.0 | 7696 | 0.2018 | 1465464 |
| 0.2075 | 4.5 | 8658 | 0.2135 | 1649720 |
| 0.023 | 5.0 | 9620 | 0.2038 | 1831920 |
| 0.237 | 5.5 | 10582 | 0.2079 | 2014928 |
| 0.3227 | 6.0 | 11544 | 0.2203 | 2198176 |
| 0.1221 | 6.5 | 12506 | 0.2235 | 2381440 |
| 0.171 | 7.0 | 13468 | 0.2170 | 2564952 |
| 0.3842 | 7.5 | 14430 | 0.2183 | 2748568 |
| 0.3918 | 8.0 | 15392 | 0.2206 | 2931096 |
| 0.1045 | 8.5 | 16354 | 0.2195 | 3113624 |
| 0.0791 | 9.0 | 17316 | 0.2215 | 3296808 |
| 0.0553 | 9.5 | 18278 | 0.2204 | 3480168 |
| 0.2145 | 10.0 | 19240 | 0.2197 | 3663512 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
pm9150348/blockassist-bc-powerful_raging_ape_1757602410
|
pm9150348
| 2025-09-11T14:53:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful raging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:53:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful raging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khazarai/Psychology-RLHF
|
khazarai
| 2025-09-11T14:53:45Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"lora",
"orpo",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:samhog/psychology-RLHF",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-11T14:50:34Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct
- lora
- orpo
- transformers
- trl
- unsloth
license: mit
datasets:
- samhog/psychology-RLHF
language:
- en
---
# Model Card for Psychology-RLHF
### Model Description
This model is a fine-tuned version of Qwen2.5-0.5B-Instruct on the samhog/psychology-RLHF dataset using ORPO.
The primary objective was to experiment with Reinforcement Learning from Human Feedback (RLHF) via ORPO, focusing on preference alignment.
The dataset comes from the psychology domain, but the main purpose of this fine-tuning was to study and demonstrate the effectiveness of ORPO for aligning small-scale instruction-tuned models.
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct
- **Fine-tuning Method**: ORPO (Odds Ratio Preference Optimization)
- **Dataset**: samhog/psychology-RLHF
- **Domain**: Psychology, mental health reasoning, and conversational alignment
## Uses
### Direct Use
- Educational and research purposes in psychology-related question-answering.
- Conversational agents for safe psychology discussions.
- Research on RLHF and ORPO fine-tuning in domain-specific contexts.
## Bias, Risks, and Limitations
- This model is not a substitute for professional mental health advice.
- Trained on synthetic/human preference data → may still generate biased or hallucinated content.
- Small-scale model (0.5B parameters) → limited reasoning ability compared to larger LLMs.
## How to Get Started with the Model
```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

login(token="")  # insert your Hugging Face access token

# Load the base model, then attach the fine-tuned LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "khazarai/Psychology-RLHF")
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer(
    [
        prompt.format(
            "You are an AI assistant that helps people find information",
            "I'm having trouble with my teenage child. They're acting out and I don't know what to do.",
            "",
        )
    ],
    return_tensors="pt",
).to("cuda")
# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
```
## Training Details
Training Metrics:
- Training Loss: ↓ from 1.86 → 0.2978
- NLL Loss: ↓ from 1.77 → 0.34
- Reward (Chosen): -0.19 → -0.037
- Reward (Rejected): -0.20 → -0.150
- Reward Gap: ≈ +0.11
Interpretation:
- Losses decreased steadily, indicating stable convergence.
- Chosen rewards improved toward 0, while rejected remained lower, showing preference alignment.
- Final model demonstrates improved distinction between good vs. bad responses.
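For context, ORPO runs of this kind are commonly set up with TRL's `ORPOTrainer`; the sketch below is a hypothetical reconstruction, not the exact script used here (hyperparameters, split name, and column layout are assumptions):
```python
# Minimal ORPO fine-tuning sketch with TRL (assumed settings, for illustration only)
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "unsloth/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# ORPO trains on preference pairs: each row needs "prompt", "chosen", and "rejected"
dataset = load_dataset("samhog/psychology-RLHF", split="train")

config = ORPOConfig(
    output_dir="orpo-psychology",
    beta=0.1,                      # weight of the odds-ratio penalty term
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-6,
    max_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,    # `tokenizer=` in older TRL releases
)
trainer.train()
```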
### Framework versions
- PEFT 0.17.1
|
stuartmoffitt/blockassist-bc-chattering_insectivorous_narwhal_1757602372
|
stuartmoffitt
| 2025-09-11T14:53:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering insectivorous narwhal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:53:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering insectivorous narwhal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zamilaoela/blockassist-bc-singing_leaping_vulture_1757602379
|
zamilaoela
| 2025-09-11T14:53:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing leaping vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:53:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing leaping vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
borsahopa67/blockassist-bc-polished_quiet_badger_1757602346
|
borsahopa67
| 2025-09-11T14:52:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"polished quiet badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:52:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished quiet badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
philipsyodavebbfs/blockassist-bc-insectivorous_pensive_bison_1757602346
|
philipsyodavebbfs
| 2025-09-11T14:52:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous pensive bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:52:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous pensive bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/Qwen3-0.6B-bnpo5-f16-100
|
moyixiao
| 2025-09-11T14:52:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:51:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leveylewlsjanot/blockassist-bc-mammalian_swift_chicken_1757602303
|
leveylewlsjanot
| 2025-09-11T14:52:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shy arctic prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:52:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shy arctic prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MATheGooner/Qwen3-0.6B-Gensyn-Swarm-shaggy_smooth_scorpion
|
MATheGooner
| 2025-09-11T14:51:52Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am shaggy_smooth_scorpion",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-27T07:19:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am shaggy_smooth_scorpion
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RossAscends/12B-Trix-TEST-iQ4KS-GGUF
|
RossAscends
| 2025-09-11T14:51:49Z | 0 | 0 | null |
[
"gguf",
"en",
"base_model:DreadPoor/Trix-TEST",
"base_model:quantized:DreadPoor/Trix-TEST",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-11T14:35:04Z |
---
license: mit
language:
- en
base_model:
- DreadPoor/Trix-TEST
---
iMatrix 4_K_S Quant of DreadPoor's Trix-TEST
Original: https://huggingface.co/DreadPoor/Trix-TEST
I saw it had a very interesting merge recipe, so I was eager to try it out even though it's not in a finished state.
The instruct format is ChatML.
Can confirm it's a huge yapper.
It can be contained somewhat by:
- giving it a minimal system prompt of `Reply to the User.`
- adding a Lorebook entry at depth 0 instructing it to respond concisely.
I don't see any slop in the responses at all.
Lots of potential here.
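For local use, one option is llama-cpp-python; a minimal sketch assuming that library, with the GGUF filename as a placeholder for the actual file in this repo:
```python
# Hypothetical local-inference sketch with llama-cpp-python (filename is a placeholder)
from llama_cpp import Llama

llm = Llama(
    model_path="12B-Trix-TEST-iQ4KS.gguf",  # replace with the actual GGUF file from this repo
    n_ctx=8192,
    chat_format="chatml",  # the instruct format noted above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Reply to the User."},  # the minimal system prompt suggested above
        {"role": "user", "content": "Introduce yourself in two sentences."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```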
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757602190
|
cwayneconnor
| 2025-09-11T14:51:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:50:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jalkafariya/blockassist-bc-stealthy_hoarse_toucan_1757602258
|
jalkafariya
| 2025-09-11T14:51:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy hoarse toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:51:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy hoarse toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
virginiammccauley4/blockassist-bc-grunting_squeaky_lynx_1757602251
|
virginiammccauley4
| 2025-09-11T14:51:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grunting squeaky lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:50:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grunting squeaky lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahumadaxhg/blockassist-bc-alert_spotted_dolphin_1757602232
|
ahumadaxhg
| 2025-09-11T14:50:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert spotted dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:50:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert spotted dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_copa_101112_1757596166
|
rbelanec
| 2025-09-11T14:50:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:46:33Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596166
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596166
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
- Num Input Tokens Seen: 281312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.109 | 0.5 | 45 | 0.0421 | 14144 |
| 0.2402 | 1.0 | 90 | 0.0356 | 28192 |
| 0.0435 | 1.5 | 135 | 0.0161 | 42208 |
| 0.0165 | 2.0 | 180 | 0.0145 | 56256 |
| 0.0101 | 2.5 | 225 | 0.0109 | 70368 |
| 0.0 | 3.0 | 270 | 0.0160 | 84320 |
| 0.0 | 3.5 | 315 | 0.0169 | 98400 |
| 0.0 | 4.0 | 360 | 0.0245 | 112416 |
| 0.0 | 4.5 | 405 | 0.0245 | 126496 |
| 0.0 | 5.0 | 450 | 0.0245 | 140544 |
| 0.0 | 5.5 | 495 | 0.0245 | 154592 |
| 0.0 | 6.0 | 540 | 0.0255 | 168768 |
| 0.0 | 6.5 | 585 | 0.0255 | 182848 |
| 0.0 | 7.0 | 630 | 0.0255 | 196896 |
| 0.0 | 7.5 | 675 | 0.0255 | 210912 |
| 0.0 | 8.0 | 720 | 0.0245 | 225024 |
| 0.0 | 8.5 | 765 | 0.0255 | 239200 |
| 0.0 | 9.0 | 810 | 0.0255 | 253152 |
| 0.0 | 9.5 | 855 | 0.0265 | 267040 |
| 0.0 | 10.0 | 900 | 0.0245 | 281312 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
priyankajugwa/blockassist-bc-exotic_frisky_ostrich_1757602197
|
priyankajugwa
| 2025-09-11T14:50:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic frisky ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:50:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic frisky ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_copa_101112_1757596165
|
rbelanec
| 2025-09-11T14:49:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:46:05Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596165
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596165
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9577
- Num Input Tokens Seen: 281312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2006 | 0.5 | 45 | 0.1967 | 14144 |
| 0.3225 | 1.0 | 90 | 0.0856 | 28192 |
| 0.4327 | 1.5 | 135 | 0.0478 | 42208 |
| 0.0202 | 2.0 | 180 | 0.0775 | 56256 |
| 0.1742 | 2.5 | 225 | 0.0552 | 70368 |
| 0.0049 | 3.0 | 270 | 0.0273 | 84320 |
| 0.0011 | 3.5 | 315 | 0.0583 | 98400 |
| 0.0018 | 4.0 | 360 | 0.0332 | 112416 |
| 0.0013 | 4.5 | 405 | 0.0406 | 126496 |
| 0.0002 | 5.0 | 450 | 0.0364 | 140544 |
| 0.0001 | 5.5 | 495 | 0.0473 | 154592 |
| 0.0001 | 6.0 | 540 | 0.0446 | 168768 |
| 0.0001 | 6.5 | 585 | 0.0423 | 182848 |
| 0.0 | 7.0 | 630 | 0.0465 | 196896 |
| 0.0 | 7.5 | 675 | 0.0435 | 210912 |
| 0.0 | 8.0 | 720 | 0.0428 | 225024 |
| 0.0 | 8.5 | 765 | 0.0453 | 239200 |
| 0.0 | 9.0 | 810 | 0.0443 | 253152 |
| 0.0 | 9.5 | 855 | 0.0495 | 267040 |
| 0.0 | 10.0 | 900 | 0.0484 | 281312 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
shikderazriel6453/blockassist-bc-burrowing_thorny_gibbon_1757602168
|
shikderazriel6453
| 2025-09-11T14:49:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing thorny gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:49:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
herculesnode/blockassist-bc-insectivorous_bold_lion_1757602129
|
herculesnode
| 2025-09-11T14:49:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:49:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lm8779694/blockassist-bc-wily_squeaky_mule_1757602142
|
lm8779694
| 2025-09-11T14:49:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily squeaky mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:49:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily squeaky mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodrigoburgd/blockassist-bc-scruffy_untamed_hare_1757602112
|
rodrigoburgd
| 2025-09-11T14:48:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy untamed hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:48:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy untamed hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khazarai/Social-RLHF
|
khazarai
| 2025-09-11T14:48:26Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"lora",
"orpo",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:ProlificAI/social-reasoning-rlhf",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-11T14:44:04Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct
- lora
- orpo
- transformers
- trl
- unsloth
license: mit
datasets:
- ProlificAI/social-reasoning-rlhf
language:
- en
---
# Model Card for Social RLHF
## Model Details
This model is a fine-tuned version of Qwen2.5-0.5B-Instruct on the ProlificAI/social-reasoning-rlhf dataset using ORPO.
The primary objective was to experiment with Reinforcement Learning from Human Feedback (RLHF) via ORPO, focusing on preference alignment.
### Model Description
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct
- **Fine-tuning Method**: ORPO (Odds Ratio Preference Optimization)
- **Dataset**: ProlificAI/social-reasoning-rlhf
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

login(token="")  # insert your Hugging Face access token

# Load the base model, then attach the fine-tuned LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "khazarai/Social-RLHF")
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer(
    [
        prompt.format(
            "You are an AI assistant that helps people find information",
            "A stranger shares private information with you on public transportation. How might you respond sensitively?",
            "",
        )
    ],
    return_tensors="pt",
).to("cuda")
# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
```
### Framework versions
- PEFT 0.17.1
|
DigitalOwl/11.9.2025_segmentation_vision-run-f8sm9-Qwen2.5-VL-7B-Instruct
|
DigitalOwl
| 2025-09-11T14:48:14Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"vision",
"multimodal",
"qwen2.5-vl",
"fine-tuned",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-09-11T14:37:38Z |
---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- vision
- multimodal
- qwen2.5-vl
- fine-tuned
language:
- en
pipeline_tag: image-text-to-text
---
# Fine-tuned Qwen2.5-VL Model
This is a fine-tuned version of Qwen/Qwen2.5-VL-7B-Instruct trained using Axolotl.
## Model Details
- **Base Model**: Qwen/Qwen2.5-VL-7B-Instruct
- **Training Framework**: Axolotl
- **Training Type**: LoRA Fine-tuning (language model only)
## Training Configuration
- Learning Rate: 0.0002
- Optimizer: adamw_8bit
- Scheduler: cosine
- Precision: bf16
- Checkpoints: Disabled for efficiency
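No inference example is included; the sketch below shows a typical Qwen2.5-VL invocation via transformers, assuming this checkpoint loads like the base model (the tags indicate 8-bit bitsandbytes weights, so `bitsandbytes` must be installed; the image path is a placeholder):
```python
# Hypothetical inference sketch for this fine-tuned Qwen2.5-VL checkpoint
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "DigitalOwl/11.9.2025_segmentation_vision-run-f8sm9-Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("scan.png")  # placeholder input image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the regions to segment in this image."},
    ]},
]

# Render the chat template, then batch text and image through the processor
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```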
|
Ayush2594/results
|
Ayush2594
| 2025-09-11T14:47:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T09:05:29Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
rbelanec/train_copa_101112_1757596164
|
rbelanec
| 2025-09-11T14:46:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:43:32Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596164
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596164
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0768
- Num Input Tokens Seen: 281312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.081 | 0.5 | 45 | 0.1215 | 14144 |
| 0.3339 | 1.0 | 90 | 0.1037 | 28192 |
| 0.0463 | 1.5 | 135 | 0.0964 | 42208 |
| 0.082 | 2.0 | 180 | 0.0777 | 56256 |
| 0.2678 | 2.5 | 225 | 0.0822 | 70368 |
| 0.0878 | 3.0 | 270 | 0.0920 | 84320 |
| 0.1495 | 3.5 | 315 | 0.1532 | 98400 |
| 0.0075 | 4.0 | 360 | 0.0768 | 112416 |
| 0.429 | 4.5 | 405 | 0.1562 | 126496 |
| 0.0002 | 5.0 | 450 | 0.1207 | 140544 |
| 0.0092 | 5.5 | 495 | 0.1345 | 154592 |
| 0.002 | 6.0 | 540 | 0.1524 | 168768 |
| 0.0064 | 6.5 | 585 | 0.1678 | 182848 |
| 0.0449 | 7.0 | 630 | 0.1447 | 196896 |
| 0.1323 | 7.5 | 675 | 0.1635 | 210912 |
| 0.0001 | 8.0 | 720 | 0.2237 | 225024 |
| 0.0211 | 8.5 | 765 | 0.2088 | 239200 |
| 0.0121 | 9.0 | 810 | 0.2073 | 253152 |
| 0.0034 | 9.5 | 855 | 0.2088 | 267040 |
| 0.1445 | 10.0 | 900 | 0.2092 | 281312 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ACECA/lowMvMax_197
|
ACECA
| 2025-09-11T14:46:18Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-11T14:00:45Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
rbelanec/train_copa_101112_1757596163
|
rbelanec
| 2025-09-11T14:45:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:39:51Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_copa_101112_1757596163
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596163
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9463
- Num Input Tokens Seen: 547440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2208 | 1.0 | 180 | 0.2563 | 27344 |
| 0.2677 | 2.0 | 360 | 0.2335 | 54736 |
| 0.2249 | 3.0 | 540 | 0.2334 | 82064 |
| 0.2551 | 4.0 | 720 | 0.2424 | 109456 |
| 0.2229 | 5.0 | 900 | 0.2327 | 136784 |
| 0.2276 | 6.0 | 1080 | 0.2340 | 164192 |
| 0.2361 | 7.0 | 1260 | 0.2310 | 191552 |
| 0.2147 | 8.0 | 1440 | 0.2424 | 218944 |
| 0.2244 | 9.0 | 1620 | 0.2365 | 246352 |
| 0.2334 | 10.0 | 1800 | 0.2399 | 273744 |
| 0.2356 | 11.0 | 1980 | 0.2416 | 301072 |
| 0.223 | 12.0 | 2160 | 0.2418 | 328464 |
| 0.2351 | 13.0 | 2340 | 0.2705 | 355840 |
| 0.1368 | 14.0 | 2520 | 0.3143 | 383168 |
| 0.0239 | 15.0 | 2700 | 0.5442 | 410512 |
| 0.1856 | 16.0 | 2880 | 0.7039 | 437952 |
| 0.029 | 17.0 | 3060 | 0.8290 | 465264 |
| 0.0011 | 18.0 | 3240 | 0.9045 | 492672 |
| 0.0005 | 19.0 | 3420 | 0.9412 | 520048 |
| 0.0008 | 20.0 | 3600 | 0.9463 | 547440 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_svamp_101112_1757596162
|
rbelanec
| 2025-09-11T14:45:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:39:42Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596162
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596162
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2841
- Num Input Tokens Seen: 704272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.3949 | 0.5 | 79 | 2.3833 | 35296 |
| 1.9797 | 1.0 | 158 | 1.8895 | 70400 |
| 1.5462 | 1.5 | 237 | 1.5126 | 106208 |
| 1.1385 | 2.0 | 316 | 1.1513 | 140736 |
| 0.734 | 2.5 | 395 | 0.8587 | 176064 |
| 0.5419 | 3.0 | 474 | 0.6560 | 211024 |
| 0.3904 | 3.5 | 553 | 0.5181 | 246128 |
| 0.3267 | 4.0 | 632 | 0.4333 | 281616 |
| 0.368 | 4.5 | 711 | 0.3808 | 316976 |
| 0.2456 | 5.0 | 790 | 0.3472 | 352256 |
| 0.2224 | 5.5 | 869 | 0.3273 | 387360 |
| 0.1667 | 6.0 | 948 | 0.3125 | 422464 |
| 0.1728 | 6.5 | 1027 | 0.3022 | 457760 |
| 0.1274 | 7.0 | 1106 | 0.2953 | 492912 |
| 0.1583 | 7.5 | 1185 | 0.2896 | 528336 |
| 0.134 | 8.0 | 1264 | 0.2862 | 563600 |
| 0.1712 | 8.5 | 1343 | 0.2843 | 598992 |
| 0.1468 | 9.0 | 1422 | 0.2843 | 633984 |
| 0.1135 | 9.5 | 1501 | 0.2850 | 669152 |
| 0.1658 | 10.0 | 1580 | 0.2841 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bunnycore/Qwen3-4B-Max-Ties
|
bunnycore
| 2025-09-11T14:44:48Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"merge",
"mergekit",
"lazymergekit",
"janhq/Jan-v1-2509",
"huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated",
"minchyeom/Qwaifu",
"base_model:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated",
"base_model:merge:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated",
"base_model:janhq/Jan-v1-2509",
"base_model:merge:janhq/Jan-v1-2509",
"base_model:minchyeom/Qwaifu",
"base_model:merge:minchyeom/Qwaifu",
"region:us"
] | null | 2025-09-11T14:42:28Z |
---
base_model:
- janhq/Jan-v1-2509
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
- minchyeom/Qwaifu
tags:
- merge
- mergekit
- lazymergekit
- janhq/Jan-v1-2509
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
- minchyeom/Qwaifu
---
# Qwen3-4B-Max-Ties
Qwen3-4B-Max-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [janhq/Jan-v1-2509](https://huggingface.co/janhq/Jan-v1-2509)
* [huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated)
* [minchyeom/Qwaifu](https://huggingface.co/minchyeom/Qwaifu)
## 🧩 Configuration
```yaml
models:
- model: janhq/Jan-v1-2509
parameters:
density: 0.2
weight: 0.2
- model: huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
parameters:
density: 0.5
weight: 0.5
- model: minchyeom/Qwaifu
parameters:
density: 0.3
weight: 0.3
merge_method: ties
base_model: janhq/Jan-v1-2509
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
# Install dependencies (Jupyter/Colab cell syntax)
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "bunnycore/Qwen3-4B-Max-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into the model's prompt format
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Run generation through a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
rbelanec/train_svamp_101112_1757596157
|
rbelanec
| 2025-09-11T14:44:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:34:40Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596157
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596157
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4107
- Num Input Tokens Seen: 1348864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.6409 | 1.0 | 315 | 0.7571 | 67488 |
| 0.262 | 2.0 | 630 | 0.3623 | 134832 |
| 0.0962 | 3.0 | 945 | 0.2180 | 202352 |
| 0.0468 | 4.0 | 1260 | 0.1878 | 269776 |
| 0.0382 | 5.0 | 1575 | 0.2140 | 337328 |
| 0.0017 | 6.0 | 1890 | 0.3292 | 404608 |
| 0.0037 | 7.0 | 2205 | 0.3098 | 472144 |
| 0.005 | 8.0 | 2520 | 0.3992 | 539664 |
| 0.0 | 9.0 | 2835 | 0.3648 | 607136 |
| 0.0002 | 10.0 | 3150 | 0.3280 | 674496 |
| 0.0 | 11.0 | 3465 | 0.3562 | 741840 |
| 0.0001 | 12.0 | 3780 | 0.3841 | 809312 |
| 0.0 | 13.0 | 4095 | 0.3958 | 876784 |
| 0.0 | 14.0 | 4410 | 0.4013 | 944080 |
| 0.0 | 15.0 | 4725 | 0.4053 | 1011456 |
| 0.0 | 16.0 | 5040 | 0.4078 | 1078880 |
| 0.0 | 17.0 | 5355 | 0.4081 | 1146416 |
| 0.0 | 18.0 | 5670 | 0.4113 | 1213888 |
| 0.0 | 19.0 | 5985 | 0.4104 | 1281488 |
| 0.0 | 20.0 | 6300 | 0.4107 | 1348864 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rbelanec/train_svamp_101112_1757596161
|
rbelanec
| 2025-09-11T14:44:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:38:49Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596161
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596161
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1795
- Num Input Tokens Seen: 704272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.1046 | 0.5 | 79 | 2.0502 | 35296 |
| 1.1999 | 1.0 | 158 | 1.2046 | 70400 |
| 0.3511 | 1.5 | 237 | 0.4055 | 106208 |
| 0.3125 | 2.0 | 316 | 0.2548 | 140736 |
| 0.1117 | 2.5 | 395 | 0.2282 | 176064 |
| 0.1093 | 3.0 | 474 | 0.2107 | 211024 |
| 0.0729 | 3.5 | 553 | 0.2023 | 246128 |
| 0.1345 | 4.0 | 632 | 0.1966 | 281616 |
| 0.1695 | 4.5 | 711 | 0.1919 | 316976 |
| 0.089 | 5.0 | 790 | 0.1873 | 352256 |
| 0.0812 | 5.5 | 869 | 0.1845 | 387360 |
| 0.0597 | 6.0 | 948 | 0.1834 | 422464 |
| 0.0819 | 6.5 | 1027 | 0.1836 | 457760 |
| 0.0442 | 7.0 | 1106 | 0.1805 | 492912 |
| 0.045 | 7.5 | 1185 | 0.1818 | 528336 |
| 0.0458 | 8.0 | 1264 | 0.1803 | 563600 |
| 0.0676 | 8.5 | 1343 | 0.1799 | 598992 |
| 0.0822 | 9.0 | 1422 | 0.1799 | 633984 |
| 0.0459 | 9.5 | 1501 | 0.1795 | 669152 |
| 0.0407 | 10.0 | 1580 | 0.1805 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
harmonyblevinsm0/blockassist-bc-silent_miniature_monkey_1757601735
|
harmonyblevinsm0
| 2025-09-11T14:43:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent miniature monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:43:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent miniature monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_svamp_101112_1757596160
|
rbelanec
| 2025-09-11T14:43:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:37:19Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596160
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596160
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1319
- Num Input Tokens Seen: 704272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
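A minimal loading sketch, assuming access to the gated Llama 3 base weights and an installed `peft`; the adapter repo id is taken from this card:
```python
# Hedged usage sketch: attach this LoRA adapter to the base model with PEFT.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_101112_1757596160")
model.eval()  # inference only; merge with model.merge_and_unload() if desired
```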
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.1528 | 0.5 | 79 | 0.2311 | 35296 |
| 0.0753 | 1.0 | 158 | 0.1515 | 70400 |
| 0.0805 | 1.5 | 237 | 0.1408 | 106208 |
| 0.1368 | 2.0 | 316 | 0.1319 | 140736 |
| 0.038 | 2.5 | 395 | 0.1435 | 176064 |
| 0.0199 | 3.0 | 474 | 0.1467 | 211024 |
| 0.0059 | 3.5 | 553 | 0.2152 | 246128 |
| 0.0396 | 4.0 | 632 | 0.1816 | 281616 |
| 0.0337 | 4.5 | 711 | 0.2312 | 316976 |
| 0.0003 | 5.0 | 790 | 0.2054 | 352256 |
| 0.0005 | 5.5 | 869 | 0.2563 | 387360 |
| 0.0001 | 6.0 | 948 | 0.2300 | 422464 |
| 0.0 | 6.5 | 1027 | 0.2501 | 457760 |
| 0.0001 | 7.0 | 1106 | 0.2568 | 492912 |
| 0.0001 | 7.5 | 1185 | 0.2675 | 528336 |
| 0.0 | 8.0 | 1264 | 0.2667 | 563600 |
| 0.0001 | 8.5 | 1343 | 0.2692 | 598992 |
| 0.0 | 9.0 | 1422 | 0.2690 | 633984 |
| 0.0 | 9.5 | 1501 | 0.2714 | 669152 |
| 0.0001 | 10.0 | 1580 | 0.2698 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
AnerYubo/blockassist-bc-pawing_downy_anaconda_1757601747
|
AnerYubo
| 2025-09-11T14:42:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing downy anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing downy anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-elusive_mammalian_termite_1757601743
|
AnerYubo
| 2025-09-11T14:42:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive mammalian termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-screeching_mute_lemur_1757601739
|
AnerYubo
| 2025-09-11T14:42:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching mute lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching mute lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-fanged_camouflaged_cassowary_1757601732
|
AnerYubo
| 2025-09-11T14:42:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged camouflaged cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:42:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged camouflaged cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_svamp_101112_1757596159
|
rbelanec
| 2025-09-11T14:42:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:35:29Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_svamp_101112_1757596159
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_101112_1757596159
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6840
- Num Input Tokens Seen: 704272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
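A hedged inference sketch for this adapter; the prompt format used during training is not documented here, so the example question and generation settings are illustrative only:
```python
# Hedged sketch: run the p-tuning adapter on a SVAMP-style word problem.
# The training prompt template is undocumented here; this is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_101112_1757596159")

messages = [{"role": "user", "content": "Dan had 5 pens and bought 3 more. How many pens does he have now?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```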
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.129 | 0.5 | 79 | 0.2475 | 35296 |
| 0.0508 | 1.0 | 158 | 0.2165 | 70400 |
| 0.0964 | 1.5 | 237 | 0.2150 | 106208 |
| 0.2175 | 2.0 | 316 | 0.1600 | 140736 |
| 0.083 | 2.5 | 395 | 0.1529 | 176064 |
| 0.0421 | 3.0 | 474 | 0.1637 | 211024 |
| 0.0575 | 3.5 | 553 | 0.1372 | 246128 |
| 0.0863 | 4.0 | 632 | 0.1360 | 281616 |
| 0.1177 | 4.5 | 711 | 0.1462 | 316976 |
| 0.0249 | 5.0 | 790 | 0.1455 | 352256 |
| 0.0291 | 5.5 | 869 | 0.1452 | 387360 |
| 0.0293 | 6.0 | 948 | 0.1715 | 422464 |
| 0.0127 | 6.5 | 1027 | 0.1800 | 457760 |
| 0.0053 | 7.0 | 1106 | 0.1682 | 492912 |
| 0.0105 | 7.5 | 1185 | 0.2050 | 528336 |
| 0.0025 | 8.0 | 1264 | 0.2022 | 563600 |
| 0.0035 | 8.5 | 1343 | 0.2209 | 598992 |
| 0.0519 | 9.0 | 1422 | 0.2223 | 633984 |
| 0.0023 | 9.5 | 1501 | 0.2223 | 669152 |
| 0.0042 | 10.0 | 1580 | 0.2244 | 704272 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/Zion-9B-i1-GGUF
|
mradermacher
| 2025-09-11T14:41:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mlx",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"base_model:nsxtai/Zion-9B",
"base_model:quantized:nsxtai/Zion-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-11T08:40:10Z |
---
base_model: nsxtai/Zion-9B
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- no
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mlx
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/nsxtai/Zion-9B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Zion-9B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Zion-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
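A minimal Python sketch, assuming `huggingface_hub` and the third-party `llama-cpp-python` bindings are installed; the filename is one of the quants listed in the table below:
```python
# Hedged sketch: download one quant from this repo and load it with the
# third-party llama-cpp-python bindings
# (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Zion-9B-i1-GGUF",
    filename="Zion-9B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```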
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q4_1.gguf) | i1-Q4_1 | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Zion-9B-i1-GGUF/resolve/main/Zion-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NahedDom/blockassist
|
NahedDom
| 2025-09-11T14:40:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T06:04:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
holsombackpatrina/blockassist-bc-shy_armored_chimpanzee_1757601581
|
holsombackpatrina
| 2025-09-11T14:39:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shy armored chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:39:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shy armored chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|