| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| lilTAT/blockassist-bc-gentle_rugged_hare_1755610530 | lilTAT | 2025-08-19T13:35:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:35:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| ethduke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-padded_iridescent_anaconda | ethduke | 2025-08-19T13:33:05Z | 64 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am padded_iridescent_anaconda", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-30T09:19:47Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am padded_iridescent_anaconda
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| alok0777/blockassist-bc-masked_pensive_lemur_1755610207 | alok0777 | 2025-08-19T13:32:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:31:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/jj-s-landscape-design | Muapi | 2025-08-19T13:31:52Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:31:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# JJ's Landscape Design

**Base model**: Flux.1 D
**Trained words**: Landscape
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:220995@1280356", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
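The snippet above ignores transport failures. A slightly hardened variant of the same call is sketched below; it assumes nothing about MUAPI's response schema beyond the body being JSON.
```python
import os

import requests

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:220995@1280356", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}

# Fail fast on HTTP errors and avoid hanging indefinitely on a slow generation.
response = requests.post(url, headers=headers, json=payload, timeout=120)
response.raise_for_status()
print(response.json())
```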
| lilTAT/blockassist-bc-gentle_rugged_hare_1755610214 | lilTAT | 2025-08-19T13:30:40Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:30:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/minimalistic-illustration | Muapi | 2025-08-19T13:30:01Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:29:41Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Minimalistic illustration

**Base model**: Flux.1 D
**Trained words**: stylized illustration, brush strokes, sr3fmj
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:942314@1054923", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1755609013 | Sayemahsjn | 2025-08-19T13:29:36Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:29:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/felix-meynet | Muapi | 2025-08-19T13:29:03Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:28:57Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Felix Meynet

**Base model**: Flux.1 D
**Trained words**: Art by Felix Meynet
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1021589@1441868", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/ethereal-gothic-sd1-sdxl-pony-flux | Muapi | 2025-08-19T13:28:48Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:28:38Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ethereal Gothic (SD1, SDXL, Pony, Flux)

**Base model**: Flux.1 D
**Trained words**: ArsMJStyle, Etherial Gothic
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1072957@1204428", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/cinematic-text-title-film-cover-on-screen-style-xl-f1d | Muapi | 2025-08-19T13:28:24Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:28:11Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Cinematic text title + Film Cover (on screen) style XL + F1D

**Base model**: Flux.1 D
**Trained words**: perfect text title style
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:520481@893826", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| wheeler404/qwen2-tiny | wheeler404 | 2025-08-19T13:27:50Z | 231 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-05T13:54:43Z |
---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A tiny test model with Qwen2.5 architecture
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Muapi/sony-mavica-mvc-fd7-real-digicam | Muapi | 2025-08-19T13:27:07Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:26:59Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Sony Mavica MVC-FD7 (Real digicam)

**Base model**: Flux.1 D
**Trained words**: m8vic2
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1147127@1290161", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| thanobidex/blockassist-bc-colorful_shiny_hare_1755608295 | thanobidex | 2025-08-19T13:26:19Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:26:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/bible-black-cinematic-anime-style-xl-f1d-illustrious-pony-sd1.5 | Muapi | 2025-08-19T13:26:18Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:24:41Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Bible Black (バイブルブラック) Cinematic + Anime style XL + F1D + Illustrious + Pony + SD1.5

**Base model**: Flux.1 D
**Trained words**: cartoon, anime, manga style, comic
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:331576@1348586", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| forstseh/blockassist-bc-arctic_soaring_heron_1755606392 | forstseh | 2025-08-19T13:26:00Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "arctic soaring heron", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:25:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring heron
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| lilTAT/blockassist-bc-gentle_rugged_hare_1755609920 | lilTAT | 2025-08-19T13:25:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:25:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| ChevalierJoseph/X1 | ChevalierJoseph | 2025-08-19T13:25:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-19T13:19:26Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ChevalierJoseph
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
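The card ships no loading example. A minimal sketch with 🤗 Transformers follows; it assumes the pushed checkpoint contains standard weights and the Mistral chat template, which this card does not confirm.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChevalierJoseph/X1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a short reply.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello, who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```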
| Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF | Growcompany | 2025-08-19T13:24:52Z | 0 | 0 | transformers | ["transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-08-19T13:24:45Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Growcompany/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -c 2048
```
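Once `llama-server` is running, recent llama.cpp builds expose an OpenAI-compatible HTTP API, by default on port 8080. A minimal client sketch, assuming that default:
```python
import requests

# Query the llama-server started above via its OpenAI-compatible endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 128,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```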
| Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF | Ransss | 2025-08-19T13:24:30Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Vortex5/Mystic-Rune-v2-12B", "base_model:quantized:Vortex5/Mystic-Rune-v2-12B", "endpoints_compatible", "region:us"] | null | 2025-08-19T13:23:35Z |
---
base_model: Vortex5/Mystic-Rune-v2-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF
This model was converted to GGUF format from [`Vortex5/Mystic-Rune-v2-12B`](https://huggingface.co/Vortex5/Mystic-Rune-v2-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/Mystic-Rune-v2-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -c 2048
```
| katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755608064 | katanyasekolah | 2025-08-19T13:23:27Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:23:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Neelectric/Llama-3-8B-Instruct_ins_v00.01 | Neelectric | 2025-08-19T13:23:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "open-r1", "sft", "conversational", "dataset:Neelectric/ins", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T13:10:33Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets: Neelectric/ins
library_name: transformers
model_name: Llama-3-8B-Instruct_ins_v00.01
tags:
- generated_from_trainer
- trl
- open-r1
- sft
licence: license
---
# Model Card for Llama-3-8B-Instruct_ins_v00.01
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [Neelectric/ins](https://huggingface.co/datasets/Neelectric/ins) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/Llama-3-8B-Instruct_ins_v00.01", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/sem/runs/f88mnrt5)
This model was trained with SFT.
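For context, a minimal TRL SFT setup of the kind this card describes is sketched below; the hyperparameters are illustrative placeholders, not the values actually used for this run, and it assumes the dataset is in a format `SFTTrainer` accepts directly.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Neelectric/ins", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Llama-3-8B-Instruct_ins_v00.01"),
)
trainer.train()
```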
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| milliarderdol/blockassist-bc-roaring_rough_scorpion_1755607796 | milliarderdol | 2025-08-19T13:22:34Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:22:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| marcuscedricridia/orbita-tiny-Q4_K_M-GGUF | marcuscedricridia | 2025-08-19T13:21:18Z | 0 | 0 | null | ["gguf", "llama-cpp", "gguf-my-repo", "base_model:NewstaR/orbita-tiny", "base_model:quantized:NewstaR/orbita-tiny", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-19T13:21:12Z |
---
base_model: NewstaR/orbita-tiny
tags:
- llama-cpp
- gguf-my-repo
---
# marcuscedricridia/orbita-tiny-Q4_K_M-GGUF
This model was converted to GGUF format from [`NewstaR/orbita-tiny`](https://huggingface.co/NewstaR/orbita-tiny) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NewstaR/orbita-tiny) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo marcuscedricridia/orbita-tiny-Q4_K_M-GGUF --hf-file orbita-tiny-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo marcuscedricridia/orbita-tiny-Q4_K_M-GGUF --hf-file orbita-tiny-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo marcuscedricridia/orbita-tiny-Q4_K_M-GGUF --hf-file orbita-tiny-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo marcuscedricridia/orbita-tiny-Q4_K_M-GGUF --hf-file orbita-tiny-q4_k_m.gguf -c 2048
```
| Muapi/moxie-cybernetic-punk-lora-s | Muapi | 2025-08-19T13:20:25Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:20:14Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Moxie Cybernetic & Punk Lora's

**Base model**: Flux.1 D
**Trained words**: gypsypunk, gypsy_punk
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:660912@1700169", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| lilTAT/blockassist-bc-gentle_rugged_hare_1755609552 | lilTAT | 2025-08-19T13:19:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:19:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| rk2357281/llama32-bhojpuri-translator2 | rk2357281 | 2025-08-19T13:19:18Z | 0 | 0 | transformers | ["transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-19T13:11:27Z |
---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rk2357281
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| unitova/blockassist-bc-zealous_sneaky_raven_1755607924 | unitova | 2025-08-19T13:18:31Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:18:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/velvet-s-epic-dragons-flux | Muapi | 2025-08-19T13:18:29Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:18:18Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Velvet's Epic Dragons | Flux

**Base model**: Flux.1 D
**Trained words**: FluxEpicDragon
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:715643@800301", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/lady-dimitrescu-resident-evil-franchise-flux-sdxl | Muapi | 2025-08-19T13:18:02Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:17:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Lady Dimitrescu - Resident Evil Franchise - Flux & SDXL

**Base model**: Flux.1 D
**Trained words**: Alcina Dimitrescu, Black hat, dress
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:441585@867247", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| GFIO/Fish_NC | GFIO | 2025-08-19T13:16:48Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-08-19T13:16:21Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
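For context, `PyTorchModelHubMixin` works roughly as sketched below: any `nn.Module` that also inherits the mixin gains `from_pretrained`/`push_to_hub`. The class and layer sizes here are hypothetical; the repository does not document the actual architecture, and `from_pretrained` only succeeds when the class matches what was pushed.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class FishModel(nn.Module, PyTorchModelHubMixin):  # hypothetical architecture
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.LazyLinear(hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        return self.net(x)


# The mixin serializes init kwargs to config.json and weights to safetensors.
model = FishModel.from_pretrained("GFIO/Fish_NC")
```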
| Muapi/style-of-h.-r.-giger-flux-295 | Muapi | 2025-08-19T13:16:01Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:15:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# style of H. R. Giger [FLUX] 295

**Base model**: Flux.1 D
**Trained words**: style of H. R. Giger
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:661699@740501", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/girls-with-guns-cinematic-style-xl-f1d | Muapi | 2025-08-19T13:15:08Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T13:14:50Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Girls With Guns (cinematic style) XL + F1D

**Base model**: Flux.1 D
**Trained words**:
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:200237@1273747", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| cloudyfall/DeoccAnything | cloudyfall | 2025-08-19T13:14:54Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-19T12:16:43Z |
---
license: apache-2.0
---
| lilTAT/blockassist-bc-gentle_rugged_hare_1755609232 | lilTAT | 2025-08-19T13:14:19Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:14:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Growcompany/SmolLM2-360M-Q4_K_M-GGUF | Growcompany | 2025-08-19T13:13:55Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:HuggingFaceTB/SmolLM2-360M", "base_model:quantized:HuggingFaceTB/SmolLM2-360M", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-19T13:13:51Z |
---
library_name: transformers
license: apache-2.0
language:
- en
base_model: HuggingFaceTB/SmolLM2-360M
tags:
- llama-cpp
- gguf-my-repo
---
# Growcompany/SmolLM2-360M-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-360M`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Growcompany/SmolLM2-360M-Q4_K_M-GGUF --hf-file smollm2-360m-q4_k_m.gguf -c 2048
```
| eason668/ecb298de-11b1-498e-8df3-f5ae51558fce-0 | eason668 | 2025-08-19T13:12:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:lmsys/vicuna-7b-v1.3", "base_model:finetune:lmsys/vicuna-7b-v1.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T07:08:13Z |
---
base_model: lmsys/vicuna-7b-v1.3
library_name: transformers
model_name: ecb298de-11b1-498e-8df3-f5ae51558fce-0
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ecb298de-11b1-498e-8df3-f5ae51558fce-0
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eason668/ecb298de-11b1-498e-8df3-f5ae51558fce-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sn99/Gradients-On-Demand/runs/w523o948)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
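For reference, the shape of a TRL DPO run looks roughly like the sketch below; the preference dataset and hyperparameters are placeholders, not the ones used to produce this checkpoint.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "lmsys/vicuna-7b-v1.3"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects preference data with "prompt"/"chosen"/"rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-output", beta=0.1),
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```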
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| canoplos112/blockassist-bc-yapping_sleek_squirrel_1755609008 | canoplos112 | 2025-08-19T13:12:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T13:10:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AliaeAI/setfit_nli_v5 | AliaeAI | 2025-08-19T13:11:44Z | 0 | 0 | setfit | ["setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "region:us"] | text-classification | 2025-08-19T13:11:27Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: I find moments in my day where I carve out time to read, even for just a few
minutes I do not usually feel energetic in the mornings when I wake up really.
Why are you changing topics? [SEP] What problems would you like to put on the
agenda?
- text: I've been trying to manage my energy levels, but it feels like an uphill battle
some days. Any tips on balancing work and rest? It's really tough to gauge when
I encounter difficulties in managing daily stress. Some days I feel like I'm handling
it well, but other days, even small tasks seem overwhelming. [SEP] Can you describe
the consistency and appearance of your stools? Have you noticed any changes recently?
- text: Yeah, there are specific foods or drinks that seem to trigger the pain, anything
spicy or greasy sets it off. I've been trying to avoid those. It comes and goes,
usually worse after I eat. Lately, I've also been feeling pretty bloated. [SEP]
Have you noticed any changes in your sleep patterns related to your meal times
or food choices?
- text: I have not noticed any changes in my weight along with experiencing abdominal
pain really, my weight's been pretty stable. I'm more concerned about this constant
painโit's wearing me down mentally too. Sometimes it feels like stress makes it
worse, but it's hard to pinpoint specific triggers. I've been trying to keep a
food diary to see if certain foods make a difference, but so far, no clear patterns.
[SEP] Is the abdominal pain constant, or does it come and go?
- text: It's the little things now, like getting out of the ambulance or even writing
reports that feel like a marathon. I've started avoiding stairs whenever I can.
I definitely need more breaks than before. Even just bending down to check a patient's
vitals can leave me winded these days. [SEP] When you realize you don't have enough
energy to do what you want, does it leave you feeling annoyed or discouraged?
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
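A minimal version of that two-step recipe with the `setfit` library is sketched below; the toy dataset and hyperparameters are placeholders (the real model was trained on roughly 1.7k labeled pairs, per the metrics further down).
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy stand-in for the real "premise [SEP] hypothesis"-style training pairs.
train_dataset = Dataset.from_dict({
    "text": ["first premise [SEP] first hypothesis", "second premise [SEP] second hypothesis"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=32, num_epochs=3),
    train_dataset=train_dataset,
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fits the LogisticRegression head
```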
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>"It's probably just my body's way of dealing with the discomfort. Some days, it feels like everything is a bit too much to handle. I wish I could do more, but the pain often holds me back. It's frustrating when I have to cancel plans or take breaks just to manage it. [SEP] I love running. It's one of the most popular recreational activities in the world. Do you like running?"</li><li>"I can't even finish folding the laundry without needing to sit down. It's the simple things that are getting harder. I'm sorry, but I have to go. Goodbye. [SEP] It was great talking with you today and hearing about your experiences! Even though things are feeling tough right now, remember that small steps forward can lead to big changes over time. Keep focusing on those little victories and I'm sure you'll find your way back to a place of greater energy. I look forward to our next chat!"</li><li>'hey ok my night [SEP] What is your favorite thing to do in your spare time? What do you like to do for fun?'</li></ul> |
| 1 | <ul><li>'I do not often join in on their game nights as often as I used to. The evenings can be a bit tough these days. They love a good round of Scrabble or Chess. Keeps their minds sharp, they say. [SEP] What are some things that usually help you feel more energized?'</li><li>"The breathlessness comes and goes, but it's definitely worse after treatment. Even just trying to change my clothes can leave me winded. I used to love reading the morning paper, but lately, I can barely focus long enough to get through an article. It's just not the same anymore. [SEP] Is there anything specific about your environment or surroundings that you think might be affecting your concentration?"</li><li>"I'm sorry to hear you're dealing with such unpleasant symptoms. It sounds really challenging. Well, the abdominal pain and diarrhea have been happening for a few weeks now. I've been having trouble keeping food down, and there's been some blood in my stools too. [SEP] that sounds really tough. i'm glad you're getting help though. hopefully you'll start feeling better soon. Could you tell more precisely where the stomach pain is located?"</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("AliaeAI/setfit_nli_v5")
# Run inference
preds = model("I find moments in my day where I carve out time to read, even for just a few minutes I do not usually feel energetic in the mornings when I wake up really. Why are you changing topics? [SEP] What problems would you like to put on the agenda?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 6 | 55.0309 | 130 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 874 |
| 1 | 872 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0009 | 1 | 0.2838 | - |
| 0.0458 | 50 | 0.2677 | - |
| 0.0916 | 100 | 0.2551 | - |
| 0.1374 | 150 | 0.2544 | - |
| 0.1832 | 200 | 0.2463 | - |
| 0.2289 | 250 | 0.253 | - |
| 0.2747 | 300 | 0.2427 | - |
| 0.3205 | 350 | 0.2223 | - |
| 0.3663 | 400 | 0.2129 | - |
| 0.4121 | 450 | 0.1816 | - |
| 0.4579 | 500 | 0.1496 | - |
| 0.5037 | 550 | 0.1176 | - |
| 0.5495 | 600 | 0.0894 | - |
| 0.5952 | 650 | 0.0639 | - |
| 0.6410 | 700 | 0.0575 | - |
| 0.6868 | 750 | 0.043 | - |
| 0.7326 | 800 | 0.0463 | - |
| 0.7784 | 850 | 0.0389 | - |
| 0.8242 | 900 | 0.0272 | - |
| 0.8700 | 950 | 0.0274 | - |
| 0.9158 | 1000 | 0.0299 | - |
| 0.9615 | 1050 | 0.0172 | - |
| 1.0073 | 1100 | 0.0217 | - |
| 1.0531 | 1150 | 0.017 | - |
| 1.0989 | 1200 | 0.0143 | - |
| 1.1447 | 1250 | 0.018 | - |
| 1.1905 | 1300 | 0.0109 | - |
| 1.2363 | 1350 | 0.0153 | - |
| 1.2821 | 1400 | 0.0099 | - |
| 1.3278 | 1450 | 0.012 | - |
| 1.3736 | 1500 | 0.0122 | - |
| 1.4194 | 1550 | 0.0158 | - |
| 1.4652 | 1600 | 0.0141 | - |
| 1.5110 | 1650 | 0.0108 | - |
| 1.5568 | 1700 | 0.0069 | - |
| 1.6026 | 1750 | 0.0071 | - |
| 1.6484 | 1800 | 0.0049 | - |
| 1.6941 | 1850 | 0.0099 | - |
| 1.7399 | 1900 | 0.0076 | - |
| 1.7857 | 1950 | 0.0028 | - |
| 1.8315 | 2000 | 0.0051 | - |
| 1.8773 | 2050 | 0.0027 | - |
| 1.9231 | 2100 | 0.0035 | - |
| 1.9689 | 2150 | 0.0032 | - |
| 2.0147 | 2200 | 0.0034 | - |
| 2.0604 | 2250 | 0.0028 | - |
| 2.1062 | 2300 | 0.002 | - |
| 2.1520 | 2350 | 0.0025 | - |
| 2.1978 | 2400 | 0.0014 | - |
| 2.2436 | 2450 | 0.0014 | - |
| 2.2894 | 2500 | 0.0011 | - |
| 2.3352 | 2550 | 0.0013 | - |
| 2.3810 | 2600 | 0.0013 | - |
| 2.4267 | 2650 | 0.0034 | - |
| 2.4725 | 2700 | 0.0024 | - |
| 2.5183 | 2750 | 0.0014 | - |
| 2.5641 | 2800 | 0.0007 | - |
| 2.6099 | 2850 | 0.0015 | - |
| 2.6557 | 2900 | 0.0007 | - |
| 2.7015 | 2950 | 0.0017 | - |
| 2.7473 | 3000 | 0.0001 | - |
| 2.7930 | 3050 | 0.002 | - |
| 2.8388 | 3100 | 0.0009 | - |
| 2.8846 | 3150 | 0.002 | - |
| 2.9304 | 3200 | 0.0008 | - |
| 2.9762 | 3250 | 0.0013 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.3
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
arsonor/whisper-tiny-en
|
arsonor
| 2025-08-19T13:11:11Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-19T12:49:24Z
|
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.3246753246753247
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6649
- Wer Ortho: 0.3245
- Wer: 0.3247
## Model description
More information needed
## Intended uses & limitations
More information needed
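As a quick smoke test, a minimal sketch using the `transformers` ASR pipeline (the audio path is a placeholder for a local English recording):
```python
from transformers import pipeline

# "sample.wav" is a placeholder path to a local English recording
asr = pipeline("automatic-speech-recognition", model="arsonor/whisper-tiny-en")
print(asr("sample.wav")["text"])
```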
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0006 | 17.8571 | 500 | 0.6649 | 0.3245 | 0.3247 |
### Framework versions
- Transformers 4.52.0
- Pytorch 2.6.0+cu124
- Datasets 2.16.0
- Tokenizers 0.21.4
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755608894
|
lilTAT
| 2025-08-19T13:08:45Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T13:08:39Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Neelectric/Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01
|
Neelectric
| 2025-08-19T13:07:47Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"open-r1",
"sft",
"conversational",
"dataset:Neelectric/ins",
"base_model:lapisrocks/Llama-3-8B-Instruct-TAR-Cyber",
"base_model:finetune:lapisrocks/Llama-3-8B-Instruct-TAR-Cyber",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T12:48:05Z
|
---
base_model: lapisrocks/Llama-3-8B-Instruct-TAR-Cyber
datasets: Neelectric/ins
library_name: transformers
model_name: Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01
tags:
- generated_from_trainer
- trl
- open-r1
- sft
licence: license
---
# Model Card for Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01
This model is a fine-tuned version of [lapisrocks/Llama-3-8B-Instruct-TAR-Cyber](https://huggingface.co/lapisrocks/Llama-3-8B-Instruct-TAR-Cyber) on the [Neelectric/ins](https://huggingface.co/datasets/Neelectric/ins) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/Llama-3-8B-Instruct-TAR-Cyber_ins_v00.01", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/sem/runs/xkobaih7)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/impressionist-landscape-lora-for-flux
|
Muapi
| 2025-08-19T13:07:10Z
| 0
| 1
| null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T13:06:51Z
|
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Impressionist Landscape LoRA for Flux

**Base model**: Flux.1 D
**Trained words**: impressionist, landscape
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:640459@716306", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
behbudiy/Llama-3.1-8B-Instruct-Uz
|
behbudiy
| 2025-08-19T13:04:26Z
| 972
| 15
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"summarization",
"translation",
"question-answering",
"conversational",
"uz",
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:behbudiy/translation-instruction",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-31T05:43:16Z
|
---
license: llama3.1
language:
- uz
- en
base_model: models/Meta-Llama-3.1-8B-Instruct
library_name: transformers
tags:
- llama
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---
### Model Description
The LLaMA-3.1-8B-Instruct-Uz model has been instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data to preserve its original knowledge while enhancing its capabilities. This model is designed to support various natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
- **Developed by:**
- [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
- [Azimjon Urinov](https://azimjonn.github.io/)
- [Khurshid Juraev](https://kjuraev.com/)
**Performance Comparison:**
| Model Name | BLEU Uz-En (One-shot) | BLEU En-Uz (One-shot) | COMET (Uz-En) | COMET (En-Uz) | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (5-shot) |
|:-----------|:----------------------|:----------------------|:--------------|:--------------|:-------------------------|:--------------------------|:------------------------|
| **Llama-3.1 8B Instruct** | 23.74 | 6.72 | 84.30 | 82.70 | 68.96 | 55.41 | 65.77 |
| **Llama-3.1 8B Instruct Uz** | 27.42 | 11.58 | 85.63 | 86.53 | 82.42 | 60.84 | 62.78 |
| **Mistral 7B Instruct** | 7.47 | 0.67 | 68.14 | 45.58 | 62.02 | 47.52 | 61.07 |
| **Mistral 7B Instruct Uz** | 29.39 | 16.77 | 86.91 | 88.75 | 79.13 | 59.38 | 55.72 |
| **Mistral Nemo Instruct** | 25.68 | 9.79 | 85.56 | 85.04 | 72.47 | 49.24 | 67.62 |
| **Mistral Nemo Instruct Uz** | 30.49 | 15.52 | 87.04 | 88.01 | 82.05 | 58.20 | 67.36 |
| **Google Translate** | 41.18 | 22.98 | 89.16 | 90.67 | - | - | - |
The results show that the Uzbek-optimized models consistently outperform their base counterparts on the FLORES+ Uz-En / En-Uz translation benchmarks (BLEU and COMET), as well as on Uzbek sentiment analysis and news classification.
On the MMLU benchmark, which measures general language understanding across multiple tasks in English, the fine-tuned models did not show a significant decline. (The base Llama model's MMLU score differs from the official score due to our evaluation method; refer to the evaluation details below.)
Looking ahead, these models are just **early versions**. We are actively improving our data curation and fine-tuning methods to deliver even better results in the near future. In addition, we will scale up the datasets for both continual pretraining and instruction tuning, and customize other strong open-source LLMs for Uzbek.
We're eager to see how these models will be used by our Uzbek community and look forward to continuing this work.
## How to use
The Llama-3.1-8B-Instruct-Uz model can be used with transformers and with the original `llama` codebase.
### Use with transformers
With `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "behbudiy/Llama-3.1-8B-Instruct-Uz"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "Berilgan gap bo'yicha hissiyot tahlilini bajaring."},
{"role": "user", "content": "Men bu filmni yaxshi ko'raman!"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama)
## Information on Evaluation Method
To evaluate on the translation task, we used the FLORES+ Uz-En / En-Uz datasets, merging the dev and test sets to create a larger evaluation set for each of the Uz-En and En-Uz directions.
We used the following prompt to do one-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.
Instructions:
1. Translate the text from Uzbek to English.
2. Maintain the original meaning and tone.
3. Use appropriate English grammar and vocabulary.
4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
5. Output only the English translation, without any additional comments.
Example:
Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
English: "The weather is very nice today, the sun is shining brightly."
Now, please translate the following Uzbek text into English:
"{sentence}"
'''
```
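A minimal sketch of how this one-shot loop could be scored (assuming `sacrebleu`; `translate()` is a hypothetical helper that fills the prompt above and decodes the model's reply, and `uz_sentences` / `en_references` stand in for the merged FLORES+ split):
```python
import sacrebleu

# hypothetical: translate(s) fills the prompt above with s and returns the decoded translation
hypotheses = [translate(s) for s in uz_sentences]
bleu = sacrebleu.corpus_bleu(hypotheses, [en_references])
print(f"BLEU: {bleu.score:.2f}")
```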
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using GPT-4o API (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
Text: {text}"
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.
Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)
Now classify this article:
"{text}"
Answer (number only):"
'''
```
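For both classification evaluations, scoring reduces to exact matching on the model's short reply; a minimal sketch (the `predict()` helper and the labeled pairs are hypothetical stand-ins):
```python
# hypothetical: predict(text) fills the task prompt above and returns the
# model's stripped reply; eval_pairs holds (text, gold_label_string) tuples
correct = sum(predict(text) == gold for text, gold in eval_pairs)
print(f"Accuracy: {correct / len(eval_pairs):.4f}")
```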
On MMLU, we performed 5-shot evaluation using the following **template**, extracting the first token generated by the model to measure accuracy:
```python
template = "The following are multiple choice questions (with answers) about [subject area].
[Example question 1]
A. text
B. text
C. text
D. text
Answer: [Correct answer letter]
.
.
.
[Example question 5]
A. text
B. text
C. text
D. text
Answer: [Correct answer letter]
Now, let's think step by step and then provide only the letter corresponding to the correct answer for the below question, without any additional explanation or comments.
[Actual MMLU test question]
A. text
B. text
C. text
D. text
Answer:"
```
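A sketch of the first-token accuracy measurement described above (`first_token()` and the question records are illustrative assumptions):
```python
# hypothetical: first_token(prompt) returns the first generated token as text;
# each record pairs the filled 5-shot template with the gold letter "A".."D"
correct = sum(first_token(prompt).strip() == answer for prompt, answer in mmlu_items)
print(f"MMLU accuracy: {correct / len(mmlu_items):.4f}")
```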
## More
For more details and examples, refer to the base model below:
https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
|
koloni/blockassist-bc-deadly_graceful_stingray_1755606803
|
koloni
| 2025-08-19T13:02:02Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T13:01:58Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755606829
|
kojeklollipop
| 2025-08-19T13:01:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T13:01:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
comic-snows/my_awesome_qa_model
|
comic-snows
| 2025-08-19T13:01:19Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-19T13:00:08Z
|
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
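As a starting point, a minimal extractive-QA sketch with the `transformers` pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="comic-snows/my_awesome_qa_model")
result = qa(
    question="What does extractive QA return?",
    context="Extractive question answering returns the answer as a span copied from the context.",
)
print(result["answer"], result["score"])
```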
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF
|
mradermacher
| 2025-08-19T13:00:10Z
| 56
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-Coder-480B-A35B-Instruct",
"base_model:quantized:Qwen/Qwen3-Coder-480B-A35B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-07-30T05:12:43Z
|
---
base_model: Qwen/Qwen3-Coder-480B-A35B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/blob/main/LICENSE
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-Coder-480B-A35B-Instruct-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
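For example, the two-part i1-IQ1_S quant listed below can be reassembled with a plain `cat` before loading (the same approach those READMEs describe; the parts are raw byte splits):
```bash
cat Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_S.gguf.part1of2 \
    Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_S.gguf.part2of2 \
    > Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_S.gguf
```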
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.imatrix.gguf) | imatrix | 0.7 | imatrix file (for creating your own quants) |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_S.gguf.part2of2) | i1-IQ1_S | 97.5 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ1_M.gguf.part3of3) | i1-IQ1_M | 108.2 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XXS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XXS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XXS.gguf.part3of3) | i1-IQ2_XXS | 126.0 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_XS.gguf.part3of3) | i1-IQ2_XS | 140.3 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_S.gguf.part3of3) | i1-IQ2_S | 142.9 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ2_M.gguf.part4of4) | i1-IQ2_M | 157.1 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K_S.gguf.part4of4) | i1-Q2_K_S | 162.6 | very low quality |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q2_K.gguf.part4of4) | i1-Q2_K | 174.8 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XXS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XXS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XXS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XXS.gguf.part4of4) | i1-IQ3_XXS | 184.4 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_XS.gguf.part4of4) | i1-IQ3_XS | 195.7 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_S.gguf.part5of5) | i1-Q3_K_S | 207.0 | IQ3_XS probably better |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_S.gguf.part5of5) | i1-IQ3_S | 207.1 | beats Q3_K* |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ3_M.gguf.part5of5) | i1-IQ3_M | 210.0 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_M.gguf.part5of5) | i1-Q3_K_M | 229.3 | IQ3_S probably better |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q3_K_L.gguf.part6of6) | i1-Q3_K_L | 248.5 | IQ3_M probably better |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-IQ4_XS.gguf.part6of6) | i1-IQ4_XS | 255.7 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_0.gguf.part6of6) | i1-Q4_0 | 271.7 | fast, low quality |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_S.gguf.part6of6) | i1-Q4_K_S | 272.9 | optimal size/speed/quality |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_K_M.gguf.part6of6) | i1-Q4_K_M | 290.2 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part1of7) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part2of7) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part3of7) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part4of7) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part5of7) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part6of7) [P7](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q4_1.gguf.part7of7) | i1-Q4_1 | 300.6 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part1of7) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part2of7) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part3of7) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part4of7) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part5of7) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part6of7) [P7](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_S.gguf.part7of7) | i1-Q5_K_S | 330.5 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q5_K_M.gguf.part7of7) | i1-Q5_K_M | 340.6 | |
| [P1](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part1of8) [P2](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part2of8) [P3](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part3of8) [P4](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part4of8) [P5](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part5of8) [P6](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part6of8) [P7](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part7of8) [P8](https://huggingface.co/mradermacher/Qwen3-Coder-480B-A35B-Instruct-i1-GGUF/resolve/main/Qwen3-Coder-480B-A35B-Instruct.i1-Q6_K.gguf.part8of8) | i1-Q6_K | 394.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Osrivers/Wan2.2-Lightning
|
Osrivers
| 2025-08-19T12:59:08Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-19T12:52:21Z
|
---
license: creativeml-openrail-m
---
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755606847
|
sampingkaca72
| 2025-08-19T12:58:53Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:58:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ThomET/MyGemmaNPC
|
ThomET
| 2025-08-19T12:57:18Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T12:54:05Z
|
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ThomET/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755608110
|
lilTAT
| 2025-08-19T12:55:37Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:55:34Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BjarneNPO/finetune_19_08_2025_12_45_15
|
BjarneNPO
| 2025-08-19T12:55:30Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:19964",
"loss:MultipleNegativesRankingLoss",
"dataset:NPOA/Bjarne-Bachelorarbeit",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T12:55:12Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:19964
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/xlm-roberta-large
widget:
- source_sentence: bei einem kann keine hinterlegt werden
sentences:
- An einem Tag gab es im August eine Überbelegung, einmal erklärt wie sie diese
nachvollziehen kann.
- Fehlermeldung weist auf eine fehlende BI hin. Anwenderin stimmt sich dazu mit
ab.
- "Ticket\r\n---------------------------\r\nExport angepasst - informiert\r\n--------------------------\r\
\nUser möchte auch in der übergreifenden Personalliste die Anpassung umgesetzt\
\ haben - daher Ticket erneut geöffnet\r\n- übergreifender Export ebenfalls angepasst\
\ - informiert"
- source_sentence: Userin darf erst am 01.02.2024 die Vertragsangebote rausschicken,
möchte aber schonmal vermerken, welchen Kindern sie ein Vertragsangebot schicken
möchte.
sentences:
- Das ist noch nicht freigeschaltet. Genauer Zeitpunkt steht auch noch nicht fest.
- "Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten\
\ zusammenführen.\r\nDa Userin weiterhin Anmeldedaten nicht zusammenführen kann\
\ Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.\r\
\nBeide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten."
- Kann die Kinder auf die Planungsliste nehmen, dann sieht sie diese sowohl in der
Planungsliste, als auch in der Liste der Anmeldungen mit dem Symbol in der Anmeldeliste.
- source_sentence: Fehlermeldung beim Erstellen der Datei.
sentences:
- In der Benutzerverwaltung unter Verwaltung.
- Bei einer Kollegin musste noch die Stundenanzahl unter Ausbildung und Statistik
eingetragen werden.
- "Wurde an den Entwickler weitergegeben.\r\nProblem konnte behoben werden, Benutzer\
\ wurde informiert."
- source_sentence: möchte wissen wenn ein Kind gestern letzmalig in der Kita war,
welches Entlassdatum muss im System eingetragen werden?
sentences:
- Fehler bereits bekannt, prüft später erneut.
- Aktuell wurde uns noch nicht gemeldet, dass wir das Jugendamt freischalten sollen.
- Der letzte Betreuungstag muss als Entlassdatum hinterlegt werden, da sonst die
BI nicht stimmt.
- source_sentence: Login mit dem Authenticator funktioniert nicht mehr, Code ist immer
ungültig
sentences:
- Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht
erneut angezeigt
- Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.
- Dies entspricht der Vorlage. muss Vorlage anpassen.
datasets:
- NPOA/Bjarne-Bachelorarbeit
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/xlm-roberta-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) <!-- at revision c23d21b0620b635a76227c604d44e43a9f0ee389 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO/finetune_19_08_2025_12_45_15")
# Run inference
queries = [
"Login mit dem Authenticator funktioniert nicht mehr, Code ist immer ung\u00fcltig",
]
documents = [
'Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.',
'Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht erneut angezeigt',
'Dies entspricht der Vorlage. muss Vorlage anpassen.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.6199, 0.3746, 0.3027]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### bjarne-bachelorarbeit
* Dataset: [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) at [273f1a5](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit/tree/273f1a515b2a1731a04a643cf39bd217d61a02a0)
* Size: 19,964 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 27.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------|
| <code>Wie kann man die Jahresurlaubsübersicht exportieren?</code> | <code>über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren</code> |
| <code>1. Vertragsabschlüsse werden nicht übertragen
<br>2. Kinder kommen nicht von nach
<br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket
<br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> |
| <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Evaluation Dataset
#### bjarne-bachelorarbeit
* Dataset: [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) at [273f1a5](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit/tree/273f1a515b2a1731a04a643cf39bd217d61a02a0)
* Size: 8,557 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 26.49 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 23.16 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Liebes Support Team!
<br>In unserer Kst. fiel der EL auf, dass es in der Urlaubsübersicht Unstimmigkeiten gibt. So werden z.B. bei der Kollegin 60 offene Tage angezeigt und im Detail (Jahresübersicht) korrekt alle eingetragenen Tage und nur 2 Tage Rest!
<br>Ich freue mich auf Ihre Rückmeldung.
<br>Mit besten Grüßen
<br>_________________________________________________
<br>Leitung Kompetenzteam
<br>Geschäftsfeld Kindertageseinrichtungen
<br> ()
<br> e.V.
<br>. 280
<br>33605
<br>Telefon: Mo.+Mi. +49 521 9216-129 Di., Do. + Fr. +49 5264 6559100
<br>E-Mail:
<br>Web: www.awo-owl.de
<br>Instagram: www.instagram.com/
<br>Facebook: www.facebook.com/
<br>Vorsitzende des Präsidiums und des Aufsichtsrates:
<br>Vorstand: (Vors.),
<br>Amtsgericht VR 1151
<br>Diese E-Mail einschließlich evtl. angehängter Dateien enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der Adressat sind und diese E-Mail irrtümlich erhalten haben, dürfen Sie weder den Inhalt dieser E-Mail nutzen, noch dürfen Sie die eventuell angehängten Datei...</code> | <code>Problem ist bekannt und wird im Verlauf des Tages behoben.</code> |
| <code>hat im einen Vertrag, aber wurde nicht nach übertragen. war wegen fehlender Anbindung auf der Schnittstelle nicht auf der Anmeldeliste.</code> | <code>Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten zusammenführen.
<br>Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.
<br>Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.</code> |
| <code>Wie kann ein Kind aus den zukünftigen Neuaufnahmen gelöscht werden?</code> | <code>Benutzer muss erst die BI und kann dann über den Button Statuswechsel durchführen das ganze Kind löschen.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
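A minimal sketch of how this configuration maps onto the Sentence Transformers v3+ training API (the split name and argument subset are illustrative; the full hyperparameter list follows below):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("FacebookAI/xlm-roberta-large")
# assumes a "train" split with the (query, answer) columns described above
dataset = load_dataset("NPOA/Bjarne-Bachelorarbeit")

# in-batch-negatives ranking loss with the scale shown above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=dataset["train"],
    loss=loss,
)
trainer.train()
```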
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0641 | 10 | 2.772 | - |
| 0.1282 | 20 | 2.7656 | - |
| 0.1923 | 30 | 2.7448 | - |
| 0.2564 | 40 | 2.674 | - |
| 0.3205 | 50 | 2.5086 | - |
| 0.3846 | 60 | 2.3308 | - |
| 0.4487 | 70 | 2.0376 | - |
| 0.5128 | 80 | 1.9653 | - |
| 0.5769 | 90 | 1.9202 | - |
| 0.6410 | 100 | 1.7578 | - |
| 0.7051 | 110 | 1.6882 | - |
| 0.7692 | 120 | 1.6155 | - |
| 0.8333 | 130 | 1.5431 | - |
| 0.8974 | 140 | 1.4487 | - |
| 0.9615 | 150 | 1.4125 | - |
| 1.0 | 156 | - | 1.3032 |
| 1.0256 | 160 | 1.3047 | - |
| 1.0897 | 170 | 1.2717 | - |
| 1.1538 | 180 | 1.2822 | - |
| 1.2179 | 190 | 1.243 | - |
| 1.2821 | 200 | 1.2183 | - |
| 1.3462 | 210 | 1.1533 | - |
| 1.4103 | 220 | 1.1534 | - |
| 1.4744 | 230 | 1.1748 | - |
| 1.5385 | 240 | 1.0993 | - |
| 1.6026 | 250 | 1.1418 | - |
| 1.6667 | 260 | 1.0975 | - |
| 1.7308 | 270 | 1.0359 | - |
| 1.7949 | 280 | 1.0728 | - |
| 1.8590 | 290 | 0.9835 | - |
| 1.9231 | 300 | 0.9846 | - |
| 1.9872 | 310 | 0.9811 | - |
| 2.0 | 312 | - | 0.9966 |
| 2.0513 | 320 | 0.8722 | - |
| 2.1154 | 330 | 0.8756 | - |
| 2.1795 | 340 | 0.9337 | - |
| 2.2436 | 350 | 0.9512 | - |
| 2.3077 | 360 | 0.915 | - |
| 2.3718 | 370 | 0.8729 | - |
| 2.4359 | 380 | 0.877 | - |
| 2.5 | 390 | 0.8838 | - |
| 2.5641 | 400 | 0.8603 | - |
| 2.6282 | 410 | 0.9071 | - |
| 2.6923 | 420 | 0.8661 | - |
| 2.7564 | 430 | 0.8705 | - |
| 2.8205 | 440 | 0.8752 | - |
| 2.8846 | 450 | 0.8926 | - |
| 2.9487 | 460 | 0.7818 | - |
| **3.0** | **468** | **-** | **0.9536** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lucienbaumgartner/moralizedMP
|
lucienbaumgartner
| 2025-08-19T12:55:19Z
| 4
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2025-08-14T14:27:31Z
|
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: if it is raining, as was stated, then it is irrelevant what someone thinks
abut whether or not it is raining. it is raining. therefore, the statement was
nonsensical.
- text: the first part of the sentence was a fact but the second half was sally's
opinion
- text: because on one hand it is but actually not a long term solution
- text: it contradicted itself
- text: cyberbully may seem cruel to everyone, but to tom, he does not feel cruel
to him.
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.868421052631579
name: Accuracy
- type: precision
value: 0.5642857142857144
name: Precision
- type: recall
value: 0.5629370629370629
name: Recall
- type: f1
value: 0.562610229276896
name: F1
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
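A minimal sketch of this two-stage recipe with the `setfit` library; the texts below are drawn from the examples in this card, but the label ids are hypothetical:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

train_dataset = Dataset.from_dict({
    "text": ["it contradicted itself", "there could be better ways to handle that"],
    "label": [0, 1],  # hypothetical label ids
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fit the LogisticRegression head
print(model.predict(["the first part of the sentence was a fact"]))
```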
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Linguistic (in)felicity | <ul><li>'because the second statement negates what was stated in the first part of the sentence'</li><li>'there is a logic conflict in the statement that renders it bizarre and nonsensical.'</li><li>'there was a contradiction of statements if read at face value, however, it could be read that being homeless is not right in which case the statement would make sense. it is unclear.'</li></ul> |
| Enrichment / reinterpretation | <ul><li>'the statement recognised the objective compassion but the opinion contradicted it'</li><li>"because while it is compassionate to help the homeless people don't always do it out of compassion."</li><li>'it could be the way how homeless are helped. there could be better ways to handle that'</li></ul> |
| Lack of understanding / clear misunderstanding | <ul><li>'it simply sounded stupid. i doubt it makes any sense'</li><li>'it statement didnt make any sense, for us to better understand, tom needs to further explain his reason for stating why its not cruel after first saying it is'</li><li>'it sounds very contradictory'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy | Precision | Recall | F1 |
|:--------|:---------|:----------|:-------|:-------|
| **all** | 0.8684 | 0.5643 | 0.5629 | 0.5626 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("lucienbaumgartner/moralizedMP")
# Run inference
preds = model("it contradicted itself")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 16.6447 | 92 |
| Label | Training Sample Count |
|:-----------------------------------------------|:----------------------|
| Enrichment / reinterpretation | 31 |
| Lack of understanding / clear misunderstanding | 10 |
| Linguistic (in)felicity | 111 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 3786
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0026 | 1 | 0.2539 | - |
| 0.1316 | 50 | 0.2248 | - |
| 0.2632 | 100 | 0.1681 | - |
| 0.3947 | 150 | 0.0854 | - |
| 0.5263 | 200 | 0.0128 | - |
| 0.6579 | 250 | 0.0074 | - |
| 0.7895 | 300 | 0.0017 | - |
| 0.9211 | 350 | 0.0021 | - |
| 1.0526 | 400 | 0.0024 | - |
| 1.1842 | 450 | 0.0004 | - |
| 1.3158 | 500 | 0.0011 | - |
| 1.4474 | 550 | 0.0016 | - |
| 1.5789 | 600 | 0.0003 | - |
| 1.7105 | 650 | 0.0002 | - |
| 1.8421 | 700 | 0.0002 | - |
| 1.9737 | 750 | 0.0002 | - |
| 2.1053 | 800 | 0.0002 | - |
| 2.2368 | 850 | 0.0002 | - |
| 2.3684 | 900 | 0.0002 | - |
| 2.5 | 950 | 0.0001 | - |
| 2.6316 | 1000 | 0.0001 | - |
| 2.7632 | 1050 | 0.0001 | - |
| 2.8947 | 1100 | 0.0001 | - |
| 3.0263 | 1150 | 0.0001 | - |
| 3.1579 | 1200 | 0.0001 | - |
| 3.2895 | 1250 | 0.0001 | - |
| 3.4211 | 1300 | 0.0001 | - |
| 3.5526 | 1350 | 0.0001 | - |
| 3.6842 | 1400 | 0.0001 | - |
| 3.8158 | 1450 | 0.0001 | - |
| 3.9474 | 1500 | 0.0001 | - |
| 4.0789 | 1550 | 0.0001 | - |
| 4.2105 | 1600 | 0.0001 | - |
| 4.3421 | 1650 | 0.0001 | - |
| 4.4737 | 1700 | 0.0001 | - |
| 4.6053 | 1750 | 0.0001 | - |
| 4.7368 | 1800 | 0.0001 | - |
| 4.8684 | 1850 | 0.0001 | - |
| 5.0 | 1900 | 0.0001 | - |
| 5.1316 | 1950 | 0.0001 | - |
| 5.2632 | 2000 | 0.0001 | - |
| 5.3947 | 2050 | 0.0001 | - |
| 5.5263 | 2100 | 0.0001 | - |
| 5.6579 | 2150 | 0.0001 | - |
| 5.7895 | 2200 | 0.0001 | - |
| 5.9211 | 2250 | 0.0001 | - |
| 6.0526 | 2300 | 0.0001 | - |
| 6.1842 | 2350 | 0.0001 | - |
| 6.3158 | 2400 | 0.0001 | - |
| 6.4474 | 2450 | 0.0001 | - |
| 6.5789 | 2500 | 0.0001 | - |
| 6.7105 | 2550 | 0.0001 | - |
| 6.8421 | 2600 | 0.0001 | - |
| 6.9737 | 2650 | 0.0001 | - |
| 7.1053 | 2700 | 0.0001 | - |
| 7.2368 | 2750 | 0.0001 | - |
| 7.3684 | 2800 | 0.0001 | - |
| 7.5 | 2850 | 0.0001 | - |
| 7.6316 | 2900 | 0.0001 | - |
| 7.7632 | 2950 | 0.0001 | - |
| 7.8947 | 3000 | 0.0001 | - |
| 8.0263 | 3050 | 0.0001 | - |
| 8.1579 | 3100 | 0.0001 | - |
| 8.2895 | 3150 | 0.0001 | - |
| 8.4211 | 3200 | 0.0001 | - |
| 8.5526 | 3250 | 0.0001 | - |
| 8.6842 | 3300 | 0.0001 | - |
| 8.8158 | 3350 | 0.0001 | - |
| 8.9474 | 3400 | 0.0012 | - |
| 9.0789 | 3450 | 0.0003 | - |
| 9.2105 | 3500 | 0.0001 | - |
| 9.3421 | 3550 | 0.0001 | - |
| 9.4737 | 3600 | 0.0001 | - |
| 9.6053 | 3650 | 0.0001 | - |
| 9.7368 | 3700 | 0.0001 | - |
| 9.8684 | 3750 | 0.0001 | - |
| 10.0 | 3800 | 0.0 | - |
### Framework Versions
- Python: 3.11.9
- SetFit: 1.1.3
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
gaoyang07/XY_Tokenizer
|
gaoyang07
| 2025-08-19T12:53:55Z
| 0
| 0
| null |
[
"pytorch",
"xy_tokenizer",
"arxiv:2506.23325",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T12:07:08Z
|
---
license: apache-2.0
---
# **Introduction**
**`XY-Tokenizer`** is a speech codec that simultaneously models both semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back to high-quality audio. It achieves efficient speech representation at only 1kbps with RVQ8 quantization at 12.5Hz frame rate.
- **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
- **Source Code:**
- [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD/tree/main/XY_Tokenizer)
- [Hugging Face Repo](https://huggingface.co/spaces/fnlp/MOSS-TTSD/tree/main/XY_Tokenizer)
## ๐ Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**
**`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B Audio Language Model. \
Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), [Blog](http://www.open-moss.com/en/moss-ttsd/), [Chinese blog](https://www.open-moss.com/cn/moss-ttsd/), and [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).
## โจ Features
- **Dual-channel modeling**: Simultaneously captures semantic meaning and acoustic details
- **Efficient representation**: 1kbps bitrate with RVQ8 quantization at 12.5Hz
- **High-quality audio tokenization**: Convert speech to discrete tokens and back with minimal quality loss
- **Long audio support**: Process audio files longer than 30 seconds using chunking with overlap
- **Batch processing**: Efficiently process multiple audio files in batches
- **24kHz output**: Generate high-quality 24kHz audio output
## ๐ Installation
```bash
git clone https://github.com/OpenMOSS/MOSS-TTSD.git
cd MOSS-TTSD
conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
pip install -r XY_Tokenizer/requirements.txt
```
## ๐ป Quick Start
Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
```python
import os
import torchaudio
from transformers import AutoModelForCausalLM
from transformers.models.moss_ttsd.processor_moss_ttsd import MossTTSDProcessor
processor = MossTTSDProcessor.from_pretrained(
"fnlp/MOSS-TTSD-v0.5",
codec_path="gaoyang07/XY_Tokenizer",
trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
"fnlp/MOSS-TTSD-v0.5",
trust_remote_code=True
).eval()
data = [{
    "base_path": "./examples",
    "text": "[S1]ๅๅ009๏ผไฝ ๅฐๅบ่ฝไธ่ฝๅฅฝๅฅฝๅทฅไฝ๏ผๆๅไฝ ไธๅฅ๏ผ่ฟไธชๆถไปฃ๏ผไธ่ทไธAIๆตชๆฝฎ๏ผๅฐฑไผ่ขซๅฝปๅบๆทๆฑฐ๏ผ[S2]่ฟไธชๅ๏ผ้ฃๆๅพๅ้ฎ้ฎ็กๅบไนไธป",
    "system_prompt": "ไฝ ๆฏไธไธชๆ นๆฎๆๆฌ็ๆๅฏนๅบ้ณ้ข็่ฏญ้ณๅๆๅจใ",
    "prompt_text": "[S1]ๅๅญ๏ผไฝ ๅฌๅ็๏ผไฝ ๅฌๅ็๏ผๅถๅฎไฝ ่ทๆๆไบบPK๏ผๆ็ๆถๅๆไนๅจ็๏ผๆไนๅจ็๏ผๆ ้ไธค๏ผไธคไปถไบ๏ผไธไธชๆฏ้ขๅญ๏ผไธๆณ่พใ[S2]ไฝ ๅซ่ฏด๏ผ้ฃๅคฉๆฝ่ๅธๆไธไธชๅพๅผๅผ็ดๆญ๏ผ็ปๆๅผไธๅบ๏ผๆฝ่ๅธไธๅพๅผๅผ็ดๆญ็ปๆๅผไธๅบ๏ผ็ปๆไธ้กฟ้ชใ",
    "prompt_audio": "panchangjiang_gazi.wav",
}]
# Preprocess the example into model-ready input tensors
inputs = processor(data)
token_ids = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
text, audios = processor.batch_decode(token_ids)
if not os.path.exists("outputs/"):
os.mkdir("outputs/")
for i, data in enumerate(audios):
for j, fragment in enumerate(data):
print(f"Saving audio_{i}_{j}.wav...", flush=True)
torchaudio.save(f"outputs/audio_{i}_{j}.wav", fragment.cpu(), 24000)
```
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755606252
|
thanobidex
| 2025-08-19T12:53:22Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:53:18Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755607240
|
canoplos112
| 2025-08-19T12:51:17Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:50:02Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755606148
|
ihsanridzi
| 2025-08-19T12:50:25Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:50:21Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/80_xuruTx
|
VoilaRaj
| 2025-08-19T12:48:40Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T12:44:52Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755605825
|
katanyasekolah
| 2025-08-19T12:47:46Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:47:43Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mostefa-Terbeche/diabetic-retinopathy-paraguay-resnet50-advanced-20250619-042814
|
Mostefa-Terbeche
| 2025-08-19T12:47:08Z
| 0
| 0
| null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:paraguay",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-19T12:00:01Z
|
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- paraguay
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: paraguay_resnet50_advanced
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: paraguay
name: PARAGUAY
metrics:
- type: accuracy
value: 0.40789473684210525
- type: quadratic-kappa
value: 0.7969016266460108
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the resnet50 architecture on the paraguay dataset with advanced preprocessing.
## Model Details
- **Architecture**: resnet50
- **Dataset**: paraguay
- **Preprocessing**: advanced
- **Training Date**: 20250619-042814
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: paraguay_resnet50_20250619-042814_new
## Performance
- **Test Accuracy**: 0.40789473684210525
- **Test Quadratic Kappa**: 0.7969016266460108
- **Validation Kappa**: 0.7969016266460108
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-paraguay-resnet50-advanced-20250619-042814",
    filename="model_best.pt"
)
# Load model (a full pickled model, hence weights_only=False on PyTorch >= 2.6)
model = torch.load(model_path, map_location='cpu', weights_only=False)
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
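To obtain one of these grades from a fundus image, here is a hedged inference sketch continuing from the Usage snippet above; the 224×224 input size, ImageNet normalization, and the `fundus.jpg` filename are assumptions, since the exact training transforms are not documented here:
```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])
labels = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

image = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
model.eval()
with torch.no_grad():
    grade = model(image).argmax(dim=1).item()
print(labels[grade])
```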
## Citation
If you use this model, please cite your research paper/thesis.
|
eason668/04a9f657-5b57-4e80-a9c4-cb286fc36f06
|
eason668
| 2025-08-19T12:46:05Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:finetune:EleutherAI/pythia-410m-deduped",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T12:31:28Z
|
---
base_model: EleutherAI/pythia-410m-deduped
library_name: transformers
model_name: 04a9f657-5b57-4e80-a9c4-cb286fc36f06
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for 04a9f657-5b57-4e80-a9c4-cb286fc36f06
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eason668/04a9f657-5b57-4e80-a9c4-cb286fc36f06", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sn99/Gradients-On-Demand/runs/14loh2az)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
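A minimal, hypothetical sketch of such a DPO setup with TRL (the preference pairs and output path are illustrative stand-ins, not the actual training data):
```python
from datasets import Dataset
from trl import DPOConfig, DPOTrainer

pairs = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})
trainer = DPOTrainer(
    model="EleutherAI/pythia-410m-deduped",   # base model from this card
    args=DPOConfig(output_dir="dpo-output"),  # hypothetical path
    train_dataset=pairs,
)
trainer.train()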
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lguaman/MyGemmaNPC
|
lguaman
| 2025-08-19T12:41:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T21:26:49Z
|
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lguaman/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
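A minimal, hypothetical sketch of such an SFT run with TRL (the conversational sample and output path are illustrative, not the actual training data):
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

train_dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "Greet the player at the tavern door."},
        {"role": "assistant", "content": "Well met, traveler! Come in out of the cold."},
    ]]
})
trainer = SFTTrainer(
    model="google/gemma-3-270m-it",           # base model from this card
    args=SFTConfig(output_dir="sft-output"),  # hypothetical path
    train_dataset=train_dataset,
)
trainer.train()
```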
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Jacksss123/net72_uid234
|
Jacksss123
| 2025-08-19T12:41:01Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T12:38:56Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jacksss123/net72_uid2
|
Jacksss123
| 2025-08-19T12:40:56Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T12:36:02Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755605670
|
quantumxnode
| 2025-08-19T12:40:07Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:40:04Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755607155
|
Dejiat
| 2025-08-19T12:39:57Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:39:52Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755605038
|
milliarderdol
| 2025-08-19T12:38:25Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:37:30Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Elsihj89/camila-keynnect
|
Elsihj89
| 2025-08-19T12:38:17Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T12:38:17Z
|
---
license: apache-2.0
---
|
chiniwini/davidmodel
|
chiniwini
| 2025-08-19T12:37:41Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T12:04:01Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Davidmodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/chiniwini/davidmodel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('chiniwini/davidmodel', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/chiniwini/davidmodel/discussions) to add images that show off what you've made with this LoRA.
|
kimxxxx/mistral_r32_a32_b8_gas2_lr5e-5_4500tk_2epoch_newdata
|
kimxxxx
| 2025-08-19T12:37:04Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T12:36:56Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755606945
|
Dejiat
| 2025-08-19T12:36:26Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:36:22Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755606723
|
Dejiat
| 2025-08-19T12:32:49Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:32:44Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neko-llm/Qwen3-235B-test4
|
neko-llm
| 2025-08-19T12:32:22Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-235B-A22B",
"base_model:finetune:Qwen/Qwen3-235B-A22B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T10:50:58Z
|
---
base_model: Qwen/Qwen3-235B-A22B
library_name: transformers
model_name: Qwen3-235B-test4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-235B-test4
This model is a fine-tuned version of [Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neko-llm/Qwen3-235B-test4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.54.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jacopo-minniti/uats-value-model
|
jacopo-minniti
| 2025-08-19T12:31:01Z
| 28
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"token-classification",
"generated_from_trainer",
"trl",
"prm",
"arxiv:2211.14275",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-11T02:06:37Z
|
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: Qwen2.5-1.5B-Reward-Math-Sheperd
tags:
- generated_from_trainer
- trl
- prm
licence: license
---
# Model Card for Qwen2.5-1.5B-Reward-Math-Sheperd
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This checkpoint is a process reward model with a token-classification head:
# it scores candidate reasoning steps rather than generating text.
scorer = pipeline("token-classification", model="jacopo-minniti/uats-value-model", device="cuda")
print(scorer("Step 1: 2 + 3 = 5. Step 2: 5 * 4 = 20."))
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/uncertainty-guided-reasoning/value-model/runs/ra2126bg)
This model was trained with PRM.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite PRM as:
```bibtex
@article{uesato2022solving,
title = {{Solving Math Word Problems With Process- and Outcome-Based Feedback}},
author = {Uesato, Jonathan and Kushman, Nate and Kumar, Ramana and Song, Francis and Siegel, Noah and Wang, Lisa and Creswell, Antonia and Irving, Geoffrey and Higgins, Irina},
year = 2022,
journal = {arXiv preprint arXiv:2211.14275}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tensorblock/Menlo_Lucy-128k-GGUF
|
tensorblock
| 2025-08-19T12:30:43Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:Menlo/Lucy-128k",
"base_model:quantized:Menlo/Lucy-128k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T12:10:23Z
|
---
license: apache-2.0
language:
- en
base_model: Menlo/Lucy-128k
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co) · [Twitter](https://twitter.com/tensorblock_aoi) · [Discord](https://discord.gg/Ej5NmeHFf2) · [GitHub](https://github.com/TensorBlock) · [Telegram](https://t.me/TensorBlock)
## Menlo/Lucy-128k - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building
</a>
</div>
This repo contains GGUF format model files for [Menlo/Lucy-128k](https://huggingface.co/Menlo/Lucy-128k).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ Try it now! ๐</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
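For a quick local smoke test, here is a minimal sketch using `llama-cpp-python` (one possible runtime, an assumption; any llama.cpp-compatible runtime works) that fills in this template by hand. The file path assumes a quant downloaded as described below.
```python
from llama_cpp import Llama

# Load a locally downloaded quant (path is an assumption; see download instructions below).
llm = Llama(model_path="Lucy-128k-Q4_K_M.gguf", n_ctx=4096)

# Fill in the ChatML-style template shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "What is the GGUF file format?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```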
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lucy-128k-Q2_K.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q2_K.gguf) | Q2_K | 0.778 GB | smallest, significant quality loss - not recommended for most purposes |
| [Lucy-128k-Q3_K_S.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q3_K_S.gguf) | Q3_K_S | 0.867 GB | very small, high quality loss |
| [Lucy-128k-Q3_K_M.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q3_K_M.gguf) | Q3_K_M | 0.940 GB | very small, high quality loss |
| [Lucy-128k-Q3_K_L.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q3_K_L.gguf) | Q3_K_L | 1.003 GB | small, substantial quality loss |
| [Lucy-128k-Q4_0.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q4_0.gguf) | Q4_0 | 1.054 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Lucy-128k-Q4_K_S.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q4_K_S.gguf) | Q4_K_S | 1.060 GB | small, greater quality loss |
| [Lucy-128k-Q4_K_M.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q4_K_M.gguf) | Q4_K_M | 1.107 GB | medium, balanced quality - recommended |
| [Lucy-128k-Q5_0.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q5_0.gguf) | Q5_0 | 1.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Lucy-128k-Q5_K_S.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q5_K_S.gguf) | Q5_K_S | 1.231 GB | large, low quality loss - recommended |
| [Lucy-128k-Q5_K_M.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q5_K_M.gguf) | Q5_K_M | 1.258 GB | large, very low quality loss - recommended |
| [Lucy-128k-Q6_K.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q6_K.gguf) | Q6_K | 1.418 GB | very large, extremely low quality loss |
| [Lucy-128k-Q8_0.gguf](https://huggingface.co/tensorblock/Menlo_Lucy-128k-GGUF/blob/main/Lucy-128k-Q8_0.gguf) | Q8_0 | 1.834 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Menlo_Lucy-128k-GGUF --include "Lucy-128k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Menlo_Lucy-128k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
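Alternatively, assuming the `huggingface_hub` package installed above, the same download can be done from Python:
```python
from huggingface_hub import hf_hub_download

# Downloads a single quantized file into MY_LOCAL_DIR and returns its local path.
path = hf_hub_download(
    repo_id="tensorblock/Menlo_Lucy-128k-GGUF",
    filename="Lucy-128k-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```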
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755606615
|
lilTAT
| 2025-08-19T12:30:42Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:30:38Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755606423
|
lqpl
| 2025-08-19T12:28:08Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:27:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kodetr/stunting-7B-Deepseek
|
kodetr
| 2025-08-19T12:27:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"stunting",
"kesehatan",
"anak",
"conversational",
"id",
"dataset:kodetr/penelitian-fundamental-stunting-qa",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T11:43:34Z
|
---
library_name: transformers
tags:
- stunting
- kesehatan
- anak
license: apache-2.0
datasets:
- kodetr/penelitian-fundamental-stunting-qa
language:
- id
metrics:
- rouge
- bleu
pipeline_tag: text-generation
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
Consultation (Q&A) on child stunting
- **Developed by:** Tanwir
- **Language:** Indonesian
### Training

### Parameter
```
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 3584,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 131072,
"max_window_layers": 28,
"model_type": "qwen2",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 10000,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.0",
"use_cache": true,
"use_mrope": false,
"use_sliding_window": false,
"vocab_size": 152064
```
### Use with transformers
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "kodetr/stunting-7B-Deepseek-R1"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
{"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1755604754
|
koloni
| 2025-08-19T12:27:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:26:57Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755605238
|
Sayemahsjn
| 2025-08-19T12:26:09Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:26:05Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
New-Clip-prabh-viral-video/New.full.videos.prabh.Viral.Video.Official.Tutorial
|
New-Clip-prabh-viral-video
| 2025-08-19T12:24:33Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-08-19T12:24:18Z
|
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
Orginal-Uppal-Farm-Girl-Viral-Video-Link/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial
|
Orginal-Uppal-Farm-Girl-Viral-Video-Link
| 2025-08-19T12:21:14Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-08-19T12:21:00Z
|
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
VIDEOS-18-afreen-viral-Video-link/New.full.videos.afreen.Viral.Video.Official.Tutorial
|
VIDEOS-18-afreen-viral-Video-link
| 2025-08-19T12:20:11Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-08-19T12:19:57Z
|
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
Marksdo/WhisperMate
|
Marksdo
| 2025-08-19T12:20:10Z
| 100
| 5
| null |
[
"gguf",
"region:us"
] | null | 2023-09-21T08:41:51Z
|
macOS-native UI app for Whisper AI processing
https://whispermate.app





|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755605923
|
Dejiat
| 2025-08-19T12:19:32Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:19:22Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ishahaf/Llama-3.3-Nemotron-Super-49B-v1.5
|
ishahaf
| 2025-08-19T12:18:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"conversational",
"custom_code",
"en",
"arxiv:2411.19146",
"arxiv:2505.00949",
"arxiv:2502.00203",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-19T12:18:39Z
|
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---
# Llama-3.3-Nemotron-Super-49B-v1.5

## Model Overview
Llama-3.3-Nemotron-Super-49B-v1.5 is a significantly upgraded version of Llama-3.3-Nemotron-Super-49B-v1. It is a large language model (LLM) derived from Meta Llama-3.3-70B-Instruct (the reference model) and is a reasoning model post-trained for reasoning, human chat preferences, and agentic tasks such as RAG and tool calling. The model supports a context length of 128K tokens.
Llama-3.3-Nemotron-Super-49B-v1.5 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model's memory footprint, enabling larger workloads, as well as fitting the model on a single GPU at high workloads (H200). This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. For more information on the NAS approach, please refer to [this paper](https://arxiv.org/abs/2411.19146).
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Science, and Tool Calling. Additionally, the model went through multiple stages of Reinforcement Learning (RL) including Reward-aware Preference Optimization (RPO) for chat, Reinforcement Learning with Verifiable Rewards (RLVR) for reasoning, and iterative Direct Preference Optimization (DPO) for Tool Calling capability enhancements. The final checkpoint was achieved after merging several RL and DPO checkpoints.
This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
- [Llama-3.1-Nemotron-Nano-4B-v1.1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1)
- [Llama-3.1-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)
This model is ready for commercial use.
## License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License.](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) Additional Information: [Llama 3.3 Community License Agreement](https://www.llama.com/llama3_3/license/). Built with Llama.
**Model Developer:** NVIDIA
**Model Dates:** Trained between November 2024 and July 2025
**Data Freshness:** The pretraining data has a cutoff of 2023 per Meta Llama 3.3 70B
## Deployment Geography
Global
### Use Case: <br>
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. <br>
### Release Date: <br>
- Hugging Face 7/25/2025 via [Llama-3_3-Nemotron-Super-49B-v1_5](https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5)
- build.nvidia.com 7/25/2025 [Llama-3_3-Nemotron-Super-49B-v1_5](https://build.nvidia.com/nvidia/llama-3_3-nemotron-super-49b-v1_5)
## References
* [\[2505.00949\] Llama-Nemotron: Efficient Reasoning Models](https://arxiv.org/abs/2505.00949)
* [\[2502.00203\] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203)
* [\[2411.19146\] Puzzle: Distillation-Based NAS for Inference-Optimized LLMs](https://arxiv.org/abs/2411.19146)
## Model Architecture
**Architecture Type:** Dense decoder-only Transformer model
**Network Architecture:** Llama 3.3 70B Instruct, customized through Neural Architecture Search (NAS)
The model is a derivative of Meta's Llama-3.3-70B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks, including the following:
- Skip attention: in some blocks, the attention is skipped entirely, or replaced with a single linear layer.
- Variable FFN: the expansion/compression ratio in the FFN layer is different between blocks.
We utilize a block-wise distillation of the reference model, where for each block we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory (optimized for a single H100-80GB GPU) while minimizing the quality degradation. The model then undergoes knowledge distillation (KD), with a focus on English single and multi-turn chat use-cases. The KD step included 40 billion tokens consisting of a mixture of 3 datasets - FineWeb, Buzz-V1.2 and Dolma.
## Intended use
Llama-3.3-Nemotron-Super-49B-v1.5 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
## Input
- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 131,072 tokens
## Output
- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 131,072 tokens
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIAโs hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Model Version
1.5 (07/25/2025)
## Software Integration
- **Runtime Engine:** Transformers
- **Recommended Hardware Microarchitecture Compatibility:**
- NVIDIA Ampere
- NVIDIA Hopper
- **Preferred Operating System(s):** Linux
## Quick Start and Usage Recommendations:
1. By default (empty system prompt) the model will respond in reasoning ON mode. Setting `/no_think` in the system prompt will enable reasoning OFF mode.
2. We recommend setting temperature to `0.6` and top-p to `0.95` for reasoning ON mode.
3. We recommend using greedy decoding for reasoning OFF mode (see the client sketch after the vLLM serve example below).
You can try this model out through the preview API, using this link: [Llama-3_3-Nemotron-Super-49B-v1_5](https://build.nvidia.com/nvidia/llama-3_3-nemotron-super-49b-v1_5).
## Use It with vLLM
```pip install vllm==0.9.2```
An example on how to serve with vLLM:
```console
$ python3 -m vllm.entrypoints.openai.api_server \
--model "nvidia/Llama-3_3-Nemotron-Super-49B-v1_5" \
--trust-remote-code \
--seed=1 \
--host="0.0.0.0" \
--port=5000 \
--served-model-name "Llama-3_3-Nemotron-Super-49B-v1_5" \
--tensor-parallel-size=8 \
--max-model-len=65536 \
--gpu-memory-utilization 0.95 \
--enforce-eager
```
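Once the server is up, a minimal client-side sketch (the prompt is illustrative; sampling values follow the recommendations above):
```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:5000/v1", api_key="dummy")

completion = client.chat.completions.create(
    model="Llama-3_3-Nemotron-Super-49B-v1_5",
    # Empty system prompt -> reasoning ON; use "/no_think" as the system content for reasoning OFF.
    messages=[
        {"role": "system", "content": ""},
        {"role": "user", "content": "Write a haiku about GPUs."},
    ],
    temperature=0.6,  # recommended for reasoning ON
    top_p=0.95,
)
print(completion.choices[0].message.content)
```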
### Running a vLLM Server with Tool-call Support
To enable tool calling usage with this model, we provide a tool parser in the repository. Here is an example on how to use it:
```console
$ git clone https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5
$ conda create -n vllm python=3.12 -y
$ conda activate vllm
$ pip install vllm==0.9.2
$ python3 -m vllm.entrypoints.openai.api_server \
--model Llama-3_3-Nemotron-Super-49B-v1_5 \
--trust-remote-code \
--seed=1 \
--host="0.0.0.0" \
--port=5000 \
--served-model-name "Llama-3_3-Nemotron-Super-49B-v1_5" \
--tensor-parallel-size=8 \
--max-model-len=65536 \
--gpu-memory-utilization 0.95 \
--enforce-eager \
--enable-auto-tool-choice \
--tool-parser-plugin "Llama-3_3-Nemotron-Super-49B-v1_5/llama_nemotron_toolcall_parser_no_streaming.py" \
--tool-call-parser "llama_nemotron_json"
```
After launching a vLLM server, you can call the server with tool-call support using a Python script like below.
```python
from openai import OpenAI
client = OpenAI(
base_url="http://0.0.0.0:5000/v1",
api_key="dummy",
)
completion = client.chat.completions.create(
model="Llama-3_3-Nemotron-Super-49B-v1_5",
messages=[
{"role": "system", "content": ""},
{"role": "user", "content": "My bill is $100. What will be the amount for 18% tip?"}
],
tools=[
{
"type": "function",
"function": {
"name": "calculate_tip",
"parameters": {
"type": "object",
"properties": {
"bill_total": {
"type": "integer",
"description": "The total amount of the bill"
},
"tip_percentage": {
"type": "integer",
"description": "The percentage of tip to be applied"
}
},
"required": ["bill_total", "tip_percentage"]
}
}
},
{
"type": "function",
"function": {
"name": "convert_currency",
"parameters": {
"type": "object",
"properties": {
"amount": {
"type": "integer",
"description": "The amount to be converted"
},
"from_currency": {
"type": "string",
"description": "The currency code to convert from"
},
"to_currency": {
"type": "string",
"description": "The currency code to convert to"
}
},
"required": ["from_currency", "amount", "to_currency"]
}
}
}
],
temperature=0.6,
top_p=0.95,
max_tokens=32768,
stream=False
)
print(completion.choices[0].message.content)
'''
<think>
Okay, let's see. The user has a bill of $100 and wants to know the amount for an 18% tip. Hmm, I need to calculate the tip based on the bill total and the percentage. The tools provided include calculate_tip, which takes bill_total and tip_percentage as parameters. So the bill_total here is 100, and the tip_percentage is 18. I should call the calculate_tip function with these values. Wait, do I need to check if the parameters are integers? The bill is $100, which is an integer, and 18% is also an integer. So that fits the function's requirements. I don't need to convert any currency here because the user is asking about a tip in the same currency. So the correct tool to use is calculate_tip with those parameters.
</think>
'''
print(completion.choices[0].message.tool_calls)
'''
[ChatCompletionMessageToolCall(id='chatcmpl-tool-e341c6954d2c48c2a0e9071c7bdefd8b', function=Function(arguments='{"bill_total": 100, "tip_percentage": 18}', name='calculate_tip'), type='function')]
'''
```
## Training and Evaluation Datasets
## Training Datasets
A large variety of training data was used for the knowledge distillation phase before the post-training pipeline, including three datasets: FineWeb, Buzz-V1.2, and Dolma.
The data for the multi-stage post-training phases for improvements in Code, Math, and Reasoning is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction-following capabilities of the original Llama instruct model.
Prompts were sourced from public and open corpora or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning-on and reasoning-off modes, to train the model to distinguish between the two modes.
We have released our [Nemotron-Post-Training-Dataset-v1](https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v1) to promote openness and transparency in model development and improvement.
**Data Collection for Training Datasets:**
Hybrid: Automated, Human, Synthetic
**Data Labeling for Training Datasets:**
Hybrid: Automated, Human, Synthetic
## Evaluation Datasets
We used the datasets listed below to evaluate Llama-3.3-Nemotron-Super-49B-v1.5.
Data Collection for Evaluation Datasets:
- Hybrid: Human, Synthetic
Data Labeling for Evaluation Datasets:
- Hybrid: Human, Synthetic, Automatic
## Evaluation Results
We evaluate the model using temperature=`0.6`, top_p=`0.95`, and 64k sequence length. We run the benchmarks up to 16 times and average the scores to be more accurate.
### MATH500
| Reasoning Mode | pass@1 (avg. over 4 runs) |
|--------------|------------|
| Reasoning On | 97.4 |
### AIME 2024
| Reasoning Mode | pass@1 (avg. over 16 runs) |
|--------------|------------|
| Reasoning On | 87.5 |
### AIME 2025
| Reasoning Mode | pass@1 (avg. over 16 runs) |
|--------------|------------|
| Reasoning On | 82.71 |
### GPQA
| Reasoning Mode | pass@1 (avg. over 4 runs) |
|--------------|------------|
| Reasoning On | 71.97 |
### LiveCodeBench 24.10-25.02
| Reasoning Mode | pass@1 (avg. over 4 runs) |
|--------------|------------|
| Reasoning On | 73.58 |
### BFCL v3
| Reasoning Mode | pass@1 (avg. over 2 runs) |
|--------------|------------|
| Reasoning On | 71.75 |
### IFEval
| Reasoning Mode | Strict:Instruction |
|--------------|------------|
| Reasoning On | 88.61 |
### ArenaHard
| Reasoning Mode | pass@1 (avg. over 1 run) |
|--------------|------------|
| Reasoning On | 92.0 |
### Humanity's Last Exam (Text-Only Subset)
| Reasoning Mode | pass@1 (avg. over 1 run) |
|--------------|------------|
| Reasoning On | 7.64 |
### MMLU Pro (CoT)
| Reasoning Mode | pass@1 (avg. over 1 run) |
|--------------|------------|
| Reasoning On | 79.53 |
All evaluations were done using the [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills) repository.
## Inference:
**Engine:**
- Transformers
**Test Hardware:**
- 2x NVIDIA H100-80GB
- 2x NVIDIA A100-80GB GPUs
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY&SECURITY.md), and [Privacy](./PRIVACY.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
```
@misc{bercovich2025llamanemotronefficientreasoningmodels,
title={Llama-Nemotron: Efficient Reasoning Models},
author={Akhiad Bercovich and Itay Levy and Izik Golan and Mohammad Dabbah and Ran El-Yaniv and Omri Puny and Ido Galil and Zach Moshe and Tomer Ronen and Najeeb Nabwani and Ido Shahaf and Oren Tropp and Ehud Karpas and Ran Zilberstein and Jiaqi Zeng and Soumye Singhal and Alexander Bukharin and Yian Zhang and Tugrul Konuk and Gerald Shen and Ameya Sunil Mahabaleshwarkar and Bilal Kartal and Yoshi Suhara and Olivier Delalleau and Zijia Chen and Zhilin Wang and David Mosallanezhad and Adi Renduchintala and Haifeng Qian and Dima Rekesh and Fei Jia and Somshubra Majumdar and Vahid Noroozi and Wasi Uddin Ahmad and Sean Narenthiran and Aleksander Ficek and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Igor Gitman and Ivan Moshkov and Wei Du and Shubham Toshniwal and George Armstrong and Branislav Kisacanin and Matvei Novikov and Daria Gitman and Evelina Bakhturina and Jane Polak Scowcroft and John Kamalu and Dan Su and Kezhi Kong and Markus Kliegl and Rabeeh Karimi and Ying Lin and Sanjeev Satheesh and Jupinder Parmar and Pritam Gundecha and Brandon Norick and Joseph Jennings and Shrimai Prabhumoye and Syeda Nahida Akter and Mostofa Patwary and Abhinav Khattar and Deepak Narayanan and Roger Waleffe and Jimmy Zhang and Bor-Yiing Su and Guyue Huang and Terry Kong and Parth Chadha and Sahil Jain and Christine Harvey and Elad Segal and Jining Huang and Sergey Kashirsky and Robert McQueen and Izzy Putterman and George Lam and Arun Venkatesan and Sherry Wu and Vinh Nguyen and Manoj Kilaru and Andrew Wang and Anna Warno and Abhilash Somasamudramath and Sandip Bhaskar and Maka Dong and Nave Assaf and Shahar Mor and Omer Ullman Argov and Scot Junkin and Oleksandr Romanenko and Pedro Larroy and Monika Katariya and Marco Rovinelli and Viji Balas and Nicholas Edelman and Anahita Bhiwandiwalla and Muthu Subramaniam and Smita Ithape and Karthik Ramamoorthy and Yuting Wu and Suguna Varshini Velury and Omri Almog and Joyjit Daw and Denys Fridman and Erick Galinkin and Michael Evans and Katherine Luna and Leon Derczynski and Nikki Pope and Eileen Long and Seth Schneider and Guillermo Siman and Tomasz Grzegorzek and Pablo Ribalta and Monika Katariya and Joey Conway and Trisha Saar and Ann Guan and Krzysztof Pawelec and Shyamala Prayaga and Oleksii Kuchaiev and Boris Ginsburg and Oluwatobi Olabiyi and Kari Briski and Jonathan Cohen and Bryan Catanzaro and Jonah Alben and Yonatan Geifman and Eric Chung and Chris Alexiuk},
year={2025},
eprint={2505.00949},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.00949},
}
```
|
xumingtensor/affine-7060819
|
xumingtensor
| 2025-08-19T12:18:34Z
| 0
| 0
|
vllm
|
[
"vllm",
"safetensors",
"mistral3",
"image-text-to-text",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:finetune:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-19T11:13:23Z
|
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
pipeline_tag: image-text-to-text
---
# Mistral-Small-3.2-24B-Instruct-2506
Mistral-Small-3.2-24B-Instruct-2506 is a minor update of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
Small-3.2 improves in the following categories:
- **Instruction following**: Small-3.2 is better at following precise instructions
- **Repetition errors**: Small-3.2 produces fewer infinite generations and repetitive answers
- **Function calling**: Small-3.2's function calling template is more robust (see [here](https://github.com/mistralai/mistral-common/blob/535b4d0a0fc94674ea17db6cf8dc2079b81cbcfa/src/mistral_common/tokens/tokenizers/instruct.py#L778) and [examples](#function-calling))
In all other categories Small-3.2 should match or slightly improve compared to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
## Key Features
- same as [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#key-features)
## Benchmark Results
We compare Mistral-Small-3.2-24B to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
For more comparison against other models of similar size, please check [Mistral-Small-3.1's Benchmarks'](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#benchmark-results)
### Text
#### Instruction Following / Chat / Tone
| Model | Wildbench v2 | Arena Hard v2 | IF (Internal; accuracy) |
|-------|---------------|---------------|------------------------|
| Small 3.1 24B Instruct | 55.6% | 19.56% | 82.75% |
| **Small 3.2 24B Instruct** | **65.33%** | **43.1%** | **84.78%** |
#### Infinite Generations
Small 3.2 reduces infinite generations by 2x on challenging, long, and repetitive prompts.
| Model | Infinite Generations (Internal; Lower is better) |
|-------|-------|
| Small 3.1 24B Instruct | 2.11% |
| **Small 3.2 24B Instruct** | **1.29%** |
#### STEM
| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT) | MBPP Plus - Pass@5 | HumanEval Plus - Pass@5 | SimpleQA (TotalAcc) |
|--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|--------------------|-------------------------|--------------------|
| Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.63% | 88.99% | 10.43% |
| **Small 3.2 24B Instruct** | 80.50% | **69.06%** | 69.42% | 44.22% | 46.13% | **78.33%** | **92.90%** | **12.10%** |
### Vision
| Model | MMMU | Mathvista | ChartQA | DocVQA | AI2D |
|--------------------------------|------------|-----------|-----------|-----------|-----------|
| Small 3.1 24B Instruct | **64.00%** | **68.91%**| 86.24% | 94.08% | 93.72% |
| **Small 3.2 24B Instruct** | 62.50% | 67.09% | **87.4%** | 94.86% | 92.91% |
## Usage
The model can be used with the following frameworks;
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend using the one provided in the [SYSTEM_PROMPT.txt](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506/blob/main/SYSTEM_PROMPT.txt) file.
### vLLM (recommended)
We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).
#### Installation
Make sure to install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.6.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.2).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Serve
We recommend using Mistral-Small-3.2-24B-Instruct-2506 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```
**Note:** Running Mistral-Small-3.2-24B-Instruct-2506 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To ping the server you can use a simple Python snippet. See the following examples.
#### Vision reasoning
Leverage the vision capabilities of Mistral-Small-3.2-24B-Instruct-2506 to make the best choice in a given scenario. Go catch them all!
<details>
<summary>Python snippet</summary>
```py
from datetime import datetime, timedelta
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
print(response.choices[0].message.content)
# In this situation, you are playing a Pokรฉmon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each:
# 1. **FIGHT**:
# - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money.
# - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal.
# 2. **BAG**:
# - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Pokรฉ Balls, or Berries. Using an item could help you capture the Pidgey or heal your Pikachu if needed.
# - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat the Pidgey quickly.
# 3. **POKรMON**:
# - **Pros**: You might have another Pokรฉmon in your party that is better suited for this battle or that you want to gain experience. Switching Pokรฉmon could also be a strategic move if you want to train a lower-level Pokรฉmon.
# - **Cons**: Switching Pokรฉmon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack.
# 4. **RUN**:
# - **Pros**: Running away could save time and conserve your Pokรฉmon's health and resources. If you are in a hurry or do not need the experience or items, running away is a safe option.
# - **Cons**: Running away means you miss out on the experience points and potential items or money that you could gain from defeating the Pidgey. It also means you do not get the chance to capture the Pidgey if you wanted to.
# ### Recommendation:
# Given the significant level advantage, the best action is likely to **FIGHT**. This will allow you to quickly defeat the Pidgey, gain experience points, and potentially earn items or money. If you are concerned about Pikachu's health, you could use an item from your **BAG** to heal it before or during the battle. Running away or switching Pokรฉmon does not seem necessary in this situation.
```
</details>
#### Function calling
Mistral-Small-3.2-24B-Instruct-2506 is excellent at function/tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Python snippet - easy</summary>
```py
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"
tools = [
{
"type": "function",
"function": {
"name": "get_current_population",
"description": "Get the up-to-date population of a given country.",
"parameters": {
"type": "object",
"properties": {
"country": {
"type": "string",
"description": "The country to find the population of.",
},
"unit": {
"type": "string",
"description": "The unit for the population.",
"enum": ["millions", "thousands"],
},
},
"required": ["country", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Can you tell me what is the biggest country depicted on the map?",
},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
},
],
}
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
assistant_message = response.choices[0].message.content
print(assistant_message)
# The biggest country depicted on the map is Russia.
messages.extend([
{"role": "assistant", "content": assistant_message},
{"role": "user", "content": "What is the population of that country in millions?"},
])
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
# [ChatCompletionMessageToolCall(id='3e92V6Vfo', function=Function(arguments='{"country": "Russia", "unit": "millions"}', name='get_current_population'), type='function')]
```
</details>
<details>
<summary>Python snippet - complex</summary>
```python
import json
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg"
def my_calculator(expression: str) -> str:
return str(eval(expression))
tools = [
{
"type": "function",
"function": {
"name": "my_calculator",
"description": "A calculator that can evaluate a mathematical expression.",
"parameters": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "The mathematical expression to evaluate.",
},
},
"required": ["expression"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Can you calculate the results for all the equations displayed in the image? Only compute the ones that involve numbers.",
},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
},
],
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
tool_calls = response.choices[0].message.tool_calls
print(tool_calls)
# [ChatCompletionMessageToolCall(id='CyQBSAtGh', function=Function(arguments='{"expression": "6 + 2 * 3"}', name='my_calculator'), type='function'), ChatCompletionMessageToolCall(id='KQqRCqvzc', function=Function(arguments='{"expression": "19 - (8 + 2) + 1"}', name='my_calculator'), type='function')]
results = []
for tool_call in tool_calls:
function_name = tool_call.function.name
function_args = tool_call.function.arguments
if function_name == "my_calculator":
result = my_calculator(**json.loads(function_args))
results.append(result)
messages.append({"role": "assistant", "tool_calls": tool_calls})
for tool_call, result in zip(tool_calls, results):
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"name": tool_call.function.name,
"content": result,
}
)
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
print(response.choices[0].message.content)
# Here are the results for the equations that involve numbers:
# 1. \( 6 + 2 \times 3 = 12 \)
# 3. \( 19 - (8 + 2) + 1 = 10 \)
# For the other equations, you need to substitute the variables with specific values to compute the results.
```
</details>
#### Instruction following
Mistral-Small-3.2-24B-Instruct-2506 will follow your instructions down to the last letter!
<details>
<summary>Python snippet</summary>
```python
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.",
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
assistant_message = response.choices[0].message.content
print(assistant_message)
# Here's a sentence where each word starts with the next letter of the alphabet, starting from 'a' and ending with 'z':
# "Always brave cats dance elegantly, fluffy giraffes happily ignore jungle kites, lovingly munching nuts, observing playful quails racing swiftly, tiny unicorns vaulting while xylophones yodel zealously."
# This sentence follows the sequence from A to Z without skipping any letters.
```
</details>
### Transformers
You can also use Mistral-Small-3.2-24B-Instruct-2506 with `Transformers`!
To make the best use of our model with `Transformers`, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.6.2` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
<details>
<summary>Python snippet</summary>
```python
from datetime import datetime, timedelta
import torch
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import Mistral3ForConditionalGeneration
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_hf_hub(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.bfloat16
)
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
tokenized = tokenizer.encode_chat_completion(ChatCompletionRequest(messages=messages))
input_ids = torch.tensor([tokenized.tokens])
attention_mask = torch.ones_like(input_ids)
pixel_values = torch.tensor(tokenized.images[0], dtype=torch.bfloat16).unsqueeze(0)
image_sizes = torch.tensor([pixel_values.shape[-2:]])
output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
pixel_values=pixel_values,
image_sizes=image_sizes,
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens) :])
print(decoded_output)
# In this situation, you are playing a Pokรฉmon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each:
# 1. **FIGHT**:
# - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money.
# - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal.
# 2. **BAG**:
# - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Pokรฉ Balls, or Berries. Using an item could help you capture Pidgey or heal Pikachu if needed.
# - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat Pidgey quickly.
# 3. **POKรMON**:
# - **Pros**: You might have another Pokรฉmon in your party that is better suited for this battle or that you want to gain experience. Switching Pokรฉmon could also be strategic if you want to train a lower-level Pokรฉmon.
# - **Cons**: Switching Pokรฉmon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack.
# 4. **RUN**:
# - **Pros**: Running away could be a quick way to avoid the battle altogether. This might be useful if you are trying to conserve resources or if you are in a hurry to get to another location.
# - **Cons**: Running away means you miss out on the experience points, items, or money that you could gain from defeating Pidgey. It also might not be the most efficient use of your time if you are trying to train your Pokรฉmon.
# ### Recommendation:
# Given the significant level advantage, the best action to take is likely **FIGHT**. This will allow you to quickly defeat Pidgey and gain experience points for Pikachu. If you are concerned about Pikachu's health, you could use the **BAG** to heal Pikachu before or during the battle. Running away or switching Pokรฉmon does not seem necessary in this situation.
```
</details>
|
java2core/gemma-3-4b-product-description
|
java2core
| 2025-08-19T12:18:29Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T10:21:15Z
|
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-3-4b-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-4b-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="java2core/gemma-3-4b-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
katreiaht/speecht5_finetuned_emirhan_tr
|
katreiaht
| 2025-08-19T12:16:30Z
| 15
| 0
| null |
[
"pytorch",
"tensorboard",
"speecht5",
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2025-08-12T14:23:53Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
## Model description
More information needed
## Intended uses & limitations
More information needed
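Until the card is filled in, here is a minimal inference sketch based on the standard `transformers` SpeechT5 API (an assumption, not the author's documented usage). The zero speaker embedding is a placeholder; in practice an x-vector, e.g. from `Matthijs/cmu-arctic-xvectors`, is used, and the example text is illustrative.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("katreiaht/speecht5_finetuned_emirhan_tr")
model = SpeechT5ForTextToSpeech.from_pretrained("katreiaht/speecht5_finetuned_emirhan_tr")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Merhaba, nasilsin?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector

# Generate a waveform at SpeechT5's native 16 kHz sampling rate.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```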
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
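For reference, a minimal sketch of how these values map onto `Seq2SeqTrainingArguments` (a reconstruction from the list above, not the author's actual training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_emirhan_tr",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    warmup_steps=100,
    max_steps=500,
    fp16=True,  # mixed precision training (native AMP)
    seed=42,
)
```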
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.502 | 0.03 | 100 | 0.4198 |
| 0.4211 | 0.06 | 200 | 0.3732 |
| 0.3771 | 0.09 | 300 | 0.3491 |
| 0.3611 | 0.12 | 400 | 0.3298 |
| 0.3528 | 0.14 | 500 | 0.3238 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.6.0+cu124
- Datasets 2.19.1
- Tokenizers 0.13.3
|
unitova/blockassist-bc-zealous_sneaky_raven_1755604158
|
unitova
| 2025-08-19T12:15:58Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:15:54Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Tensavitprice/TensavitMexico
|
Tensavitprice
| 2025-08-19T12:14:56Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T12:14:04Z
|
---
license: apache-2.0
---
ยฟQuรฉ es Tensavit y cรณmo funciona?
Tensavit cรกpsula es una cรกpsula para la hipertensiรณn especialmente formulada, diseรฑada para ayudar a controlar la presiรณn arterial alta de forma natural. Actรบa favoreciendo una circulaciรณn saludable, reduciendo la presiรณn arterial y ayudando al corazรณn a funcionar de forma mรกs eficiente. La cรกpsula promueve el equilibrio del sistema cardiovascular, ayudando al cuerpo a mantener niveles estables de presiรณn arterial. Al mejorar el flujo sanguรญneo y la eficiencia cardรญaca general, reduce la fatiga y el estrรฉs relacionados con la hipertensiรณn. En resumen, Tensavit Pastillas ofrece una forma segura, natural y eficaz de apoyar la salud cardรญaca y mantener una presiรณn arterial normal Tensavit costo.
Sitio web oficial:<a href="https://www.nutritionsee.com/tensaviexico">www.Tensavit.com</a>
<p><a href="https://www.nutritionsee.com/tensaviexico"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/07/Tensavit-mexico.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/tensaviexico">ยกCompra ya! Haz clic en el enlace de abajo para mรกs informaciรณn y obtรฉn un 50% de descuento. ยกDate prisa!</a>
Sitio web oficial:<a href="https://www.nutritionsee.com/tensaviexico">www.Tensavit.com</a>
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755603953
|
hakimjustbao
| 2025-08-19T12:14:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:14:38Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BjarneNPO/finetune_19_08_2025_12_04_35
|
BjarneNPO
| 2025-08-19T12:14:33Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:19964",
"loss:MultipleNegativesRankingLoss",
"dataset:NPOA/Bjarne-Bachelorarbeit",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T12:14:15Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:19964
- loss:MultipleNegativesRankingLoss
base_model: FacebookAI/xlm-roberta-large
widget:
- source_sentence: bei einem kann keine hinterlegt werden
sentences:
- An einem Tag gab es im August eine Überbelegung, einmal erklärt wie sie diese
nachvollziehen kann.
- Fehlermeldung weist auf eine fehlende BI hin. Anwenderin stimmt sich dazu mit
ab.
- "Ticket\r\n---------------------------\r\nExport angepasst - informiert\r\n--------------------------\r\
\nUser mรถchte auch in der รผbergreifenden Personalliste die Anpassung umgesetzt\
\ haben - daher Ticket erneut geรถffnet\r\n- รผbergreifender Export ebenfalls angepasst\
\ - informiert"
- source_sentence: Userin darf erst am 01.02.2024 die Vertragsangebote rausschicken,
möchte aber schonmal vermerken, welchen Kindern sie ein Vertragsangebot schicken
möchte.
sentences:
- Das ist noch nicht freigeschaltet. Genauer Zeitpunkt steht auch noch nicht fest.
- "Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten\
\ zusammenfรผhren.\r\nDa Userin weiterhin Anmeldedaten nicht zusammenfรผhren kann\
\ Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.\r\
\nBeide Kinder wurden nun รผbertragen und befinden sich unter Vetragsangeboten."
- Kann die Kinder auf die Planungsliste nehmen, dann sieht sie diese sowohl in der
Planungsliste, als auch in der Liste der Anmeldungen mit dem Symbol in der Anmeldeliste.
- source_sentence: Fehlermeldung beim Erstellen der Datei.
sentences:
- In der Benutzerverwaltung unter Verwaltung.
- Bei einer Kollegin musste noch die Stundenanzahl unter Ausbildung und Statistik
eingetragen werden.
- "Wurde an den Entwickler weitergegeben.\r\nProblem konnte behoben werden, Benutzer\
\ wurde informiert."
- source_sentence: möchte wissen wenn ein Kind gestern letzmalig in der Kita war,
welches Entlassdatum muss im System eingetragen werden?
sentences:
- Fehler bereist bekannt, prüft später erneut.
- Aktuell wurde uns noch nicht gemeldet, dass wir das Jugendamt freischalten sollen.
- Der letzte Betreuungstag muss als Entlassdatum hinterlegt werden, da sonst die
BI nicht stimmt.
- source_sentence: Login mit dem Authenticator funktioniert nicht mehr, Code ist immer
ungültig
sentences:
- Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht
erneut angezeigt
- Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.
- Dies entspricht der Vorlage. muss Vorlage anpassen.
datasets:
- NPOA/Bjarne-Bachelorarbeit
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on FacebookAI/xlm-roberta-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) <!-- at revision c23d21b0620b635a76227c604d44e43a9f0ee389 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO/finetune_19_08_2025_12_04_35")
# Run inference
queries = [
"Login mit dem Authenticator funktioniert nicht mehr, Code ist immer ung\u00fcltig",
]
documents = [
'Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.',
'Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht erneut angezeigt',
'Dies entspricht der Vorlage. muss Vorlage anpassen.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.6394, 0.3721, 0.3045]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### bjarne-bachelorarbeit
* Dataset: [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) at [273f1a5](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit/tree/273f1a515b2a1731a04a643cf39bd217d61a02a0)
* Size: 19,964 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 27.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------|
| <code>Wie kann man die Jahresurlaubsübersicht exportieren?</code> | <code>über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren</code> |
| <code>1. Vertragsabschlüsse werden nicht übertragen
<br>2. Kinder kommen nicht von nach
<br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket
<br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> |
| <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
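As a hedged sketch (not from the original training script), this is how the loss above would typically be instantiated in sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("FacebookAI/xlm-roberta-large")
# scale=20.0 and cosine similarity match the parameters listed above;
# cos_sim is this loss's default similarity_fct.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```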
### Evaluation Dataset
#### bjarne-bachelorarbeit
* Dataset: [bjarne-bachelorarbeit](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit) at [273f1a5](https://huggingface.co/datasets/NPOA/Bjarne-Bachelorarbeit/tree/273f1a515b2a1731a04a643cf39bd217d61a02a0)
* Size: 8,557 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 26.49 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 23.16 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Liebes Support Team!
<br>In unserer Kst. fiel der EL auf, dass es in der Urlaubsübersicht Unstimmigkeiten gibt. So werden z.B. bei der Kollegin 60 offene Tage angezeigt und im Detail (Jahresübersicht) korrekt alle eingetragenen Tage und nur 2 Tage Rest!
<br>Ich freue mich auf Ihre Rückmeldung.
<br>Mit besten Grüßen
<br>_________________________________________________
<br>Leitung Kompetenzteam
<br>Geschäftsfeld Kindertageseinrichtungen
<br> ()
<br> e.V.
<br>. 280
<br>33605
<br>Telefon: Mo.+Mi. +49 521 9216-129 Di., Do. + Fr. +49 5264 6559100
<br>E-Mail:
<br>Web: www.awo-owl.de
<br>Instagram: www.instagram.com/
<br>Facebook: www.facebook.com/
<br>Vorsitzende des Prรคsidiums und des Aufsichtsrates:
<br>Vorstand: (Vors.),
<br>Amtsgericht VR 1151
<br>Diese E-Mail einschließlich evtl. angehängter Dateien enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der Adressat sind und diese E-Mail irrtümlich erhalten haben, dürfen Sie weder den Inhalt dieser E-Mail nutzen, noch dürfen Sie die eventuell angehängten Datei...</code> | <code>Problem ist bekannt und wird im Verlauf des Tages behoben.</code> |
| <code>hat im einen Vertrag, aber wurde nicht nach übertragen. war wegen fehlender Anbindung auf der Schnittstelle nicht auf der Anmeldeliste.</code> | <code>Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten zusammenführen.
<br>Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.
<br>Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.</code> |
| <code>Wie kann ein Kind aus den zukünftigen Neuaufnahmen gelöscht werden?</code> | <code>Benutzer muss erst die BI und kann dann über den Button Statuswechsel durchführen das ganze Kind löschen.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
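Expressed as `SentenceTransformerTrainingArguments`, a minimal sketch of these non-default values (`output_dir` is a placeholder, not from the original run):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```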
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0321 | 10 | 2.7702 | - |
| 0.0641 | 20 | 2.7704 | - |
| 0.0962 | 30 | 2.7687 | - |
| 0.1282 | 40 | 2.751 | - |
| 0.1603 | 50 | 2.7247 | - |
| 0.1923 | 60 | 2.6236 | - |
| 0.2244 | 70 | 2.531 | - |
| 0.2564 | 80 | 2.2151 | - |
| 0.2885 | 90 | 2.2467 | - |
| 0.3205 | 100 | 2.1738 | - |
| 0.3526 | 110 | 2.1371 | - |
| 0.3846 | 120 | 2.0452 | - |
| 0.4167 | 130 | 1.8365 | - |
| 0.4487 | 140 | 1.845 | - |
| 0.4808 | 150 | 1.833 | - |
| 0.5128 | 160 | 1.786 | - |
| 0.5449 | 170 | 1.6423 | - |
| 0.5769 | 180 | 1.6776 | - |
| 0.6090 | 190 | 1.5273 | - |
| 0.6410 | 200 | 1.5422 | - |
| 0.6731 | 210 | 1.4751 | - |
| 0.7051 | 220 | 1.5307 | - |
| 0.7372 | 230 | 1.4808 | - |
| 0.7692 | 240 | 1.5441 | - |
| 0.8013 | 250 | 1.4391 | - |
| 0.8333 | 260 | 1.4369 | - |
| 0.8654 | 270 | 1.3921 | - |
| 0.8974 | 280 | 1.3706 | - |
| 0.9295 | 290 | 1.284 | - |
| 0.9615 | 300 | 1.2533 | - |
| 0.9936 | 310 | 1.2374 | - |
| 1.0 | 312 | - | 1.2057 |
| 1.0256 | 320 | 1.0532 | - |
| 1.0577 | 330 | 1.1323 | - |
| 1.0897 | 340 | 1.122 | - |
| 1.1218 | 350 | 1.1906 | - |
| 1.1538 | 360 | 1.164 | - |
| 1.1859 | 370 | 1.1539 | - |
| 1.2179 | 380 | 1.1795 | - |
| 1.25 | 390 | 1.1069 | - |
| 1.2821 | 400 | 1.0994 | - |
| 1.3141 | 410 | 1.0724 | - |
| 1.3462 | 420 | 0.9909 | - |
| 1.3782 | 430 | 0.9629 | - |
| 1.4103 | 440 | 1.0669 | - |
| 1.4423 | 450 | 1.0211 | - |
| 1.4744 | 460 | 1.097 | - |
| 1.5064 | 470 | 0.9962 | - |
| 1.5385 | 480 | 1.033 | - |
| 1.5705 | 490 | 1.0081 | - |
| 1.6026 | 500 | 1.0058 | - |
| 1.6346 | 510 | 1.01 | - |
| 1.6667 | 520 | 1.003 | - |
| 1.6987 | 530 | 0.9263 | - |
| 1.7308 | 540 | 0.9063 | - |
| 1.7628 | 550 | 0.9257 | - |
| 1.7949 | 560 | 0.9505 | - |
| 1.8269 | 570 | 0.9143 | - |
| 1.8590 | 580 | 0.7969 | - |
| 1.8910 | 590 | 0.9154 | - |
| 1.9231 | 600 | 0.8981 | - |
| 1.9551 | 610 | 0.8402 | - |
| 1.9872 | 620 | 0.9209 | - |
| 2.0 | 624 | - | 0.9280 |
| 2.0192 | 630 | 0.8143 | - |
| 2.0513 | 640 | 0.678 | - |
| 2.0833 | 650 | 0.7752 | - |
| 2.1154 | 660 | 0.7558 | - |
| 2.1474 | 670 | 0.8078 | - |
| 2.1795 | 680 | 0.8394 | - |
| 2.2115 | 690 | 0.801 | - |
| 2.2436 | 700 | 0.7981 | - |
| 2.2756 | 710 | 0.8227 | - |
| 2.3077 | 720 | 0.7513 | - |
| 2.3397 | 730 | 0.7267 | - |
| 2.3718 | 740 | 0.7529 | - |
| 2.4038 | 750 | 0.7288 | - |
| 2.4359 | 760 | 0.7737 | - |
| 2.4679 | 770 | 0.7432 | - |
| 2.5 | 780 | 0.8039 | - |
| 2.5321 | 790 | 0.6745 | - |
| 2.5641 | 800 | 0.7803 | - |
| 2.5962 | 810 | 0.8329 | - |
| 2.6282 | 820 | 0.7227 | - |
| 2.6603 | 830 | 0.7594 | - |
| 2.6923 | 840 | 0.7854 | - |
| 2.7244 | 850 | 0.7474 | - |
| 2.7564 | 860 | 0.7927 | - |
| 2.7885 | 870 | 0.7554 | - |
| 2.8205 | 880 | 0.7502 | - |
| 2.8526 | 890 | 0.7097 | - |
| 2.8846 | 900 | 0.832 | - |
| 2.9167 | 910 | 0.596 | - |
| 2.9487 | 920 | 0.6849 | - |
| 2.9808 | 930 | 0.7035 | - |
| **3.0** | **936** | **-** | **0.8882** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755603977
|
helmutsukocok
| 2025-08-19T12:14:13Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:14:09Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755603946
|
ihsanridzi
| 2025-08-19T12:13:50Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:13:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LBST/t10_pick_and_place_smolvla_017000
|
LBST
| 2025-08-19T12:13:09Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-017000",
"region:us"
] |
robotics
| 2025-08-19T12:13:04Z
|
---
library_name: lerobot
tags:
- robotics
- pick-and-place
- smolvla
- checkpoint-017000
---
# T10 Pick and Place Policy - Checkpoint 017000
This model is a checkpoint from the training of a pick-and-place policy using the SmolVLA architecture.
## Model Details
- **Checkpoint**: 017000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T10)
- **Training Step**: 017000
## Usage
You can evaluate this model using LeRobot:
```bash
python -m lerobot.scripts.eval \
--policy.path=LBST/t10_pick_and_place_smolvla_017000 \
--env.type=<your_environment> \
--eval.n_episodes=10 \
--policy.device=cuda
```
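Alternatively, a hedged sketch of loading the checkpoint directly in Python; the `SmolVLAPolicy` import path is an assumption based on LeRobot's policy layout and may differ between versions:
```python
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

# Import path and class name are assumptions; check your lerobot version.
policy = SmolVLAPolicy.from_pretrained("LBST/t10_pick_and_place_smolvla_017000")
policy.eval()
```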
## Files
- `config.json`: Policy configuration
- `model.safetensors`: Model weights in SafeTensors format
- `train_config.json`: Complete training configuration for reproducibility
## Parent Repository
This checkpoint was extracted from: [LBST/t10_pick_and_place_files](https://huggingface.co/LBST/t10_pick_and_place_files)
---
*Generated automatically from checkpoint 017000*
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755605544
|
Dejiat
| 2025-08-19T12:13:04Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T12:12:59Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LBST/t10_pick_and_place_smolvla_016000
|
LBST
| 2025-08-19T12:12:45Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-016000",
"region:us"
] |
robotics
| 2025-08-19T12:12:38Z
|
---
library_name: lerobot
tags:
- robotics
- pick-and-place
- smolvla
- checkpoint-016000
---
# T10 Pick and Place Policy - Checkpoint 016000
This model is a checkpoint from the training of a pick-and-place policy using the SmolVLA architecture.
## Model Details
- **Checkpoint**: 016000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T10)
- **Training Step**: 016000
## Usage
You can evaluate this model using LeRobot:
```bash
python -m lerobot.scripts.eval \
--policy.path=LBST/t10_pick_and_place_smolvla_016000 \
--env.type=<your_environment> \
--eval.n_episodes=10 \
--policy.device=cuda
```
## Files
- `config.json`: Policy configuration
- `model.safetensors`: Model weights in SafeTensors format
- `train_config.json`: Complete training configuration for reproducibility
## Parent Repository
This checkpoint was extracted from: [LBST/t10_pick_and_place_files](https://huggingface.co/LBST/t10_pick_and_place_files)
---
*Generated automatically from checkpoint 016000*
|
loweegee/a2c-PandaReachDense-v3
|
loweegee
| 2025-08-19T12:12:38Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T13:40:16Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on sb3's usual `<algo>-<env>.zip` export convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from sb3's standard export naming; adjust if the repo differs.
checkpoint = load_from_hub(
    repo_id="loweegee/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
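To sanity-check the reported mean reward, a hedged evaluation sketch reusing `model` from the snippet above (assumes `panda-gym` is installed, which registers the environment on import):
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v3 on import
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```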
|
gaoyang07/XYCodec
|
gaoyang07
| 2025-08-19T12:12:18Z
| 0
| 0
| null |
[
"pytorch",
"xycodec",
"arxiv:2506.23325",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T12:07:08Z
|
---
license: apache-2.0
---
# **Introduction**
**`XY-Tokenizer`** is a speech codec that simultaneously models both the semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back into high-quality audio. It achieves efficient speech representation at only 1 kbps, using RVQ8 quantization at a 12.5 Hz frame rate.
- **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
- **Source Code:**
- [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD/tree/main/XY_Tokenizer)
- [Hugging Face Repo](https://huggingface.co/spaces/fnlp/MOSS-TTSD/tree/main/XY_Tokenizer)
## 🔗 Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**
**`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B Audio Language Model. \
Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), [Blog](http://www.open-moss.com/en/moss-ttsd/), [Blog (Chinese)](https://www.open-moss.com/cn/moss-ttsd/), and [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).
## ✨ Features
- **Dual-channel modeling**: Simultaneously captures semantic meaning and acoustic details
- **Efficient representation**: 1 kbps bitrate with RVQ8 quantization at 12.5 Hz
- **High-quality audio tokenization**: Convert speech to discrete tokens and back with minimal quality loss
- **Long audio support**: Process audio files longer than 30 seconds using chunking with overlap
- **Batch processing**: Efficiently process multiple audio files in batches
- **24kHz output**: Generate high-quality 24kHz audio output
## 🚀 Installation
```bash
git clone https://github.com/OpenMOSS/MOSS-TTSD.git
cd MOSS-TTSD
conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
pip install -r XY_Tokenizer/requirements.txt
```
## 💻 Quick Start
Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
```python
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel
# 1. Load the feature extractor and the codec model
feature_extractor = AutoFeatureExtractor.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True)
codec = AutoModel.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True, device_map="auto").eval()
# 2. Load and preprocess the audio
# The model expects a 16kHz sample rate.
wav_form, sampling_rate = torchaudio.load("examples/zh_spk1_moon.wav")
if sampling_rate != 16000:
wav_form = torchaudio.functional.resample(wav_form, orig_freq=sampling_rate, new_freq=16000)
# 3. Encode the audio into discrete codes
input_spectrum = feature_extractor(wav_form, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
# The 'code' dictionary contains the discrete audio codes
code = codec.encode(input_spectrum)
# 4. Decode the codes back to an audio waveform
# The output is high-quality 24kHz audio.
output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)
# 5. Save the reconstructed audio
for i, audio in enumerate(output_wav["audio_values"]):
torchaudio.save(f"outputs/audio_{i}.wav", audio.cpu(), 24000)
```
|