| Column | Type | Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-20 06:31:12 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 566 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-20 06:28:52 |
| card | string | length 11 to 1.01M |
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755566804 | IvanJAjebu | 2025-08-19T01:28:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:27:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755565337 | pempekmangedd | 2025-08-19T01:28:05Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:28:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/QuyXuan_documents-master-3B-GGUF | tensorblock | 2025-08-19T01:26:23Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "TensorBlock", "GGUF", "en", "base_model:QuyXuan/documents-master-3B", "base_model:quantized:QuyXuan/documents-master-3B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-19T00:49:30Z |
---
base_model: QuyXuan/documents-master-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- TensorBlock
- GGUF
license: apache-2.0
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co) | [Twitter](https://twitter.com/tensorblock_aoi) | [Discord](https://discord.gg/Ej5NmeHFf2) | [GitHub](https://github.com/TensorBlock) | [Telegram](https://t.me/TensorBlock)
## QuyXuan/documents-master-3B - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building
</a>
</div>
This repo contains GGUF format model files for [QuyXuan/documents-master-3B](https://huggingface.co/QuyXuan/documents-master-3B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
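Since the files are llama.cpp-compatible, they can also be run locally from Python. Below is a minimal sketch assuming the optional `llama-cpp-python` bindings (not mentioned in this card) and the Q4_K_M file downloaded as described further down:
```python
# Hedged sketch: local inference with the llama-cpp-python bindings (an
# assumption; the card itself only guarantees llama.cpp compatibility).
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/documents-master-3B-Q4_K_M.gguf",  # see download section below
    n_ctx=4096,  # context window size
)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```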
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π Try it now! π</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 July 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
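For concreteness, the template above can be filled programmatically; a minimal sketch in which both strings are placeholder values:
```python
# Fill the Llama-3-style prompt template above; both values are placeholders.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Cutting Knowledge Date: December 2023\n"
    "Today Date: 26 July 2024\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

print(TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="Summarize GGUF quantization in one sentence.",
))
```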
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [documents-master-3B-Q2_K.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
| [documents-master-3B-Q3_K_S.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
| [documents-master-3B-Q3_K_M.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
| [documents-master-3B-Q3_K_L.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
| [documents-master-3B-Q4_0.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [documents-master-3B-Q4_K_S.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
| [documents-master-3B-Q4_K_M.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
| [documents-master-3B-Q5_0.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [documents-master-3B-Q5_K_S.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
| [documents-master-3B-Q5_K_M.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
| [documents-master-3B-Q6_K.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
| [documents-master-3B-Q8_0.gguf](https://huggingface.co/tensorblock/QuyXuan_documents-master-3B-GGUF/blob/main/documents-master-3B-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/QuyXuan_documents-master-3B-GGUF --include "documents-master-3B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/QuyXuan_documents-master-3B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
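The same download can also be scripted from Python via the `huggingface_hub` API; a minimal sketch, with the table's recommended Q4_K_M quant chosen for illustration:
```python
# Python alternative to the CLI above, using huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/QuyXuan_documents-master-3B-GGUF",
    filename="documents-master-3B-Q4_K_M.gguf",  # recommended quant from the table above
    local_dir="MY_LOCAL_DIR",
)
print(path)  # local path to the downloaded file
```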
|
KellyChenYZBY/gpt-oss-20b-mlx-4Bit | KellyChenYZBY | 2025-08-19T01:24:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "mlx", "mlx-my-repo", "conversational", "base_model:openai/gpt-oss-20b", "base_model:quantized:openai/gpt-oss-20b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us"] | text-generation | 2025-08-19T01:23:22Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
- mlx-my-repo
base_model: openai/gpt-oss-20b
---
# KellyChenYZBY/gpt-oss-20b-mlx-4Bit
This model, [KellyChenYZBY/gpt-oss-20b-mlx-4Bit](https://huggingface.co/KellyChenYZBY/gpt-oss-20b-mlx-4Bit), was converted to MLX format from [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("KellyChenYZBY/gpt-oss-20b-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
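Continuing the snippet above, the response length can be capped; this assumes the installed mlx-lm release accepts the `max_tokens` keyword:
```python
# Continuation of the snippet above: cap the response length.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```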
|
g-assismoraes/Qwen3-4B-Base-aki-alpha0.09-var-hatebr-ep30-g5-v3 | g-assismoraes | 2025-08-19T01:24:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T01:20:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755564749 | hakimjustbao | 2025-08-19T01:19:53Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:19:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_mis_run2_gen0_WXS_doc1000_synt64_lr1e-04_acm_MPP | dgambettaphd | 2025-08-19T01:16:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-08-19T01:14:32Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755564493 | ihsanridzi | 2025-08-19T01:14:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:14:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Challenge_Llama-3.2-1B-5isumep7 | donoway | 2025-08-19T01:14:23Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T00:55:03Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-5isumep7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-5isumep7
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8009
- Model Preparation Time: 0.0058
- Mdl: 3365.0442
- Accumulated Loss: 2332.4709
- Correct Preds: 106.0
- Total Preds: 299.0
- Accuracy: 0.3545
- Correct Gen Preds: 88.0
- Gen Accuracy: 0.2943
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 1.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0156
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 28.0
- Correct Preds 33: 33.0
- Total Labels 33: 73.0
- Accuracy 33: 0.4521
- Gen Accuracy 33: 0.3836
- Correct Gen Preds 34: 42.0
- Correct Preds 34: 47.0
- Total Labels 34: 78.0
- Accuracy 34: 0.6026
- Gen Accuracy 34: 0.5385
- Correct Gen Preds 35: 18.0
- Correct Preds 35: 25.0
- Total Labels 35: 83.0
- Accuracy 35: 0.3012
- Gen Accuracy 35: 0.2169
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
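The Mdl figure appears to be the accumulated loss converted from nats to bits: 2332.4709 / ln 2 ≈ 3365.04, matching the value reported above.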
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
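A hedged sketch of the equivalent `TrainingArguments`, assuming the standard Hugging Face Trainer API (the output directory name is hypothetical):
```python
# Sketch of the training configuration above as TrainingArguments.
# Assumes the standard Trainer API; output_dir is hypothetical.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs/ARC-Challenge_Llama-3.2-1B-5isumep7",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",         # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```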
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7995 | 1.0 | 1 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7995 | 2.0 | 2 | 2.2937 | 0.0058 | 989.4309 | 685.8212 | 80.0 | 299.0 | 0.2676 | 80.0 | 0.2676 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 71.0 | 71.0 | 73.0 | 0.9726 | 0.9726 | 9.0 | 9.0 | 78.0 | 0.1154 | 0.1154 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.267 | 3.0 | 3 | 1.6015 | 0.0058 | 690.8479 | 478.8593 | 89.0 | 299.0 | 0.2977 | 89.0 | 0.2977 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 9.0 | 9.0 | 73.0 | 0.1233 | 0.1233 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 76.0 | 76.0 | 83.0 | 0.9157 | 0.9157 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.8933 | 4.0 | 4 | 1.6894 | 0.0058 | 728.7577 | 505.1363 | 82.0 | 299.0 | 0.2742 | 76.0 | 0.2542 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 60.0 | 64.0 | 73.0 | 0.8767 | 0.8219 | 14.0 | 15.0 | 78.0 | 0.1923 | 0.1795 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.2132 | 5.0 | 5 | 2.5134 | 0.0058 | 1084.1873 | 751.5014 | 82.0 | 299.0 | 0.2742 | 68.0 | 0.2274 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 47.0 | 57.0 | 73.0 | 0.7808 | 0.6438 | 17.0 | 18.0 | 78.0 | 0.2308 | 0.2179 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0162 | 6.0 | 6 | 3.6396 | 0.0058 | 1569.9815 | 1088.2283 | 86.0 | 299.0 | 0.2876 | 42.0 | 0.1405 | 0.0 | 4.0 | 64.0 | 0.0625 | 0.0 | 18.0 | 46.0 | 73.0 | 0.6301 | 0.2466 | 18.0 | 26.0 | 78.0 | 0.3333 | 0.2308 | 6.0 | 10.0 | 83.0 | 0.1205 | 0.0723 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0003 | 7.0 | 7 | 4.5675 | 0.0058 | 1970.2637 | 1365.6827 | 93.0 | 299.0 | 0.3110 | 59.0 | 0.1973 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 26.0 | 48.0 | 73.0 | 0.6575 | 0.3562 | 25.0 | 30.0 | 78.0 | 0.3846 | 0.3205 | 8.0 | 13.0 | 83.0 | 0.1566 | 0.0964 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 5.2552 | 0.0058 | 2266.9178 | 1571.3076 | 100.0 | 299.0 | 0.3344 | 70.0 | 0.2341 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 46.0 | 73.0 | 0.6301 | 0.3836 | 30.0 | 35.0 | 78.0 | 0.4487 | 0.3846 | 12.0 | 18.0 | 83.0 | 0.2169 | 0.1446 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 5.7907 | 0.0058 | 2497.9193 | 1731.4257 | 101.0 | 299.0 | 0.3378 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 26.0 | 44.0 | 73.0 | 0.6027 | 0.3562 | 29.0 | 38.0 | 78.0 | 0.4872 | 0.3718 | 14.0 | 19.0 | 83.0 | 0.2289 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 6.2295 | 0.0058 | 2687.1969 | 1862.6230 | 98.0 | 299.0 | 0.3278 | 71.0 | 0.2375 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 26.0 | 41.0 | 73.0 | 0.5616 | 0.3562 | 34.0 | 40.0 | 78.0 | 0.5128 | 0.4359 | 11.0 | 17.0 | 83.0 | 0.2048 | 0.1325 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 6.5743 | 0.0058 | 2835.9109 | 1965.7036 | 100.0 | 299.0 | 0.3344 | 76.0 | 0.2542 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 39.0 | 73.0 | 0.5342 | 0.3699 | 36.0 | 41.0 | 78.0 | 0.5256 | 0.4615 | 13.0 | 19.0 | 83.0 | 0.2289 | 0.1566 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 6.8050 | 0.0058 | 2935.4320 | 2034.6864 | 104.0 | 299.0 | 0.3478 | 79.0 | 0.2642 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 39.0 | 73.0 | 0.5342 | 0.3836 | 37.0 | 44.0 | 78.0 | 0.5641 | 0.4744 | 14.0 | 20.0 | 83.0 | 0.2410 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 7.0095 | 0.0058 | 3023.6716 | 2095.8495 | 104.0 | 299.0 | 0.3478 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 39.0 | 73.0 | 0.5342 | 0.3836 | 39.0 | 44.0 | 78.0 | 0.5641 | 0.5 | 14.0 | 20.0 | 83.0 | 0.2410 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 7.1582 | 0.0058 | 3087.7993 | 2140.2994 | 103.0 | 299.0 | 0.3445 | 80.0 | 0.2676 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 36.0 | 73.0 | 0.4932 | 0.3836 | 39.0 | 45.0 | 78.0 | 0.5769 | 0.5 | 13.0 | 21.0 | 83.0 | 0.2530 | 0.1566 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 7.2819 | 0.0058 | 3141.1773 | 2177.2982 | 100.0 | 299.0 | 0.3344 | 78.0 | 0.2609 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 35.0 | 73.0 | 0.4795 | 0.3699 | 39.0 | 45.0 | 78.0 | 0.5769 | 0.5 | 12.0 | 19.0 | 83.0 | 0.2289 | 0.1446 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 7.3814 | 0.0058 | 3184.0741 | 2207.0320 | 102.0 | 299.0 | 0.3411 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 34.0 | 73.0 | 0.4658 | 0.3699 | 39.0 | 45.0 | 78.0 | 0.5769 | 0.5 | 15.0 | 22.0 | 83.0 | 0.2651 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 7.5501 | 0.0058 | 3256.8522 | 2257.4779 | 100.0 | 299.0 | 0.3344 | 79.0 | 0.2642 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 33.0 | 73.0 | 0.4521 | 0.3425 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 13.0 | 21.0 | 83.0 | 0.2530 | 0.1566 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 7.5594 | 0.0058 | 3260.8747 | 2260.2661 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 33.0 | 73.0 | 0.4521 | 0.3699 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 14.0 | 22.0 | 83.0 | 0.2651 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 7.6389 | 0.0058 | 3295.1529 | 2284.0259 | 102.0 | 299.0 | 0.3411 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 33.0 | 73.0 | 0.4521 | 0.3836 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 7.6509 | 0.0058 | 3300.3452 | 2287.6249 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 7.7131 | 0.0058 | 3327.1709 | 2306.2191 | 102.0 | 299.0 | 0.3411 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 34.0 | 73.0 | 0.4658 | 0.3836 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 15.0 | 22.0 | 83.0 | 0.2651 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 7.7309 | 0.0058 | 3334.8353 | 2311.5317 | 103.0 | 299.0 | 0.3445 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 7.7352 | 0.0058 | 3336.6981 | 2312.8229 | 103.0 | 299.0 | 0.3445 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 7.7672 | 0.0058 | 3350.5161 | 2322.4008 | 102.0 | 299.0 | 0.3411 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 7.7816 | 0.0058 | 3356.7368 | 2326.7126 | 103.0 | 299.0 | 0.3445 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 7.8011 | 0.0058 | 3365.1388 | 2332.5364 | 101.0 | 299.0 | 0.3378 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 14.0 | 23.0 | 83.0 | 0.2771 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 7.8358 | 0.0058 | 3380.0976 | 2342.9051 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 24.0 | 30.0 | 73.0 | 0.4110 | 0.3288 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 7.8089 | 0.0058 | 3368.4752 | 2334.8491 | 99.0 | 299.0 | 0.3311 | 80.0 | 0.2676 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 24.0 | 30.0 | 73.0 | 0.4110 | 0.3288 | 40.0 | 44.0 | 78.0 | 0.5641 | 0.5128 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 7.8165 | 0.0058 | 3371.7741 | 2337.1357 | 101.0 | 299.0 | 0.3378 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 7.8396 | 0.0058 | 3381.7473 | 2344.0486 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 33.0 | 73.0 | 0.4521 | 0.3836 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 7.7978 | 0.0058 | 3363.7015 | 2331.5402 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 32.0 | 73.0 | 0.4384 | 0.3699 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 7.8221 | 0.0058 | 3374.1689 | 2338.7957 | 100.0 | 299.0 | 0.3344 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 14.0 | 22.0 | 83.0 | 0.2651 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 7.8283 | 0.0058 | 3376.8651 | 2340.6645 | 100.0 | 299.0 | 0.3344 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 7.8301 | 0.0058 | 3377.6460 | 2341.2058 | 102.0 | 299.0 | 0.3411 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 17.0 | 25.0 | 83.0 | 0.3012 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 7.8523 | 0.0058 | 3387.2340 | 2347.8517 | 103.0 | 299.0 | 0.3445 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 7.8513 | 0.0058 | 3386.7919 | 2347.5453 | 102.0 | 299.0 | 0.3411 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 30.0 | 73.0 | 0.4110 | 0.3562 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 17.0 | 25.0 | 83.0 | 0.3012 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 7.8565 | 0.0058 | 3389.0261 | 2349.0939 | 103.0 | 299.0 | 0.3445 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 7.8351 | 0.0058 | 3379.7752 | 2342.6816 | 104.0 | 299.0 | 0.3478 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 17.0 | 24.0 | 83.0 | 0.2892 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 39.0 | 39 | 7.8009 | 0.0058 | 3365.0442 | 2332.4709 | 106.0 | 299.0 | 0.3545 | 88.0 | 0.2943 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 33.0 | 73.0 | 0.4521 | 0.3836 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 18.0 | 25.0 | 83.0 | 0.3012 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 40.0 | 40 | 7.8364 | 0.0058 | 3380.3548 | 2343.0834 | 103.0 | 299.0 | 0.3445 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 33.0 | 73.0 | 0.4521 | 0.3699 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 41.0 | 41 | 7.8601 | 0.0058 | 3390.5641 | 2350.1599 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 18.0 | 25.0 | 83.0 | 0.3012 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 42.0 | 42 | 7.8950 | 0.0058 | 3405.6151 | 2360.5925 | 103.0 | 299.0 | 0.3445 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 43.0 | 43 | 7.8221 | 0.0058 | 3374.2062 | 2338.8215 | 103.0 | 299.0 | 0.3445 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 31.0 | 73.0 | 0.4247 | 0.3562 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 44.0 | 44 | 7.8410 | 0.0058 | 3382.3271 | 2344.4505 | 103.0 | 299.0 | 0.3445 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 47.0 | 78.0 | 0.6026 | 0.5256 | 14.0 | 23.0 | 83.0 | 0.2771 | 0.1687 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 45.0 | 45 | 7.8648 | 0.0058 | 3392.6239 | 2351.5877 | 100.0 | 299.0 | 0.3344 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 29.0 | 73.0 | 0.3973 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 46.0 | 46 | 7.8424 | 0.0058 | 3382.9270 | 2344.8663 | 102.0 | 299.0 | 0.3411 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 31.0 | 73.0 | 0.4247 | 0.3562 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 47.0 | 47 | 7.8520 | 0.0058 | 3387.0809 | 2347.7456 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 15.0 | 22.0 | 83.0 | 0.2651 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 48.0 | 48 | 7.8797 | 0.0058 | 3399.0337 | 2356.0306 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 31.0 | 73.0 | 0.4247 | 0.3562 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 18.0 | 25.0 | 83.0 | 0.3012 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 49.0 | 49 | 7.8223 | 0.0058 | 3374.2798 | 2338.8726 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 17.0 | 24.0 | 83.0 | 0.2892 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 50.0 | 50 | 7.8456 | 0.0058 | 3384.3088 | 2345.8241 | 102.0 | 299.0 | 0.3411 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 17.0 | 25.0 | 83.0 | 0.3012 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 51.0 | 51 | 7.8888 | 0.0058 | 3402.9783 | 2358.7648 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 24.0 | 30.0 | 73.0 | 0.4110 | 0.3288 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 52.0 | 52 | 7.8554 | 0.0058 | 3388.5570 | 2348.7687 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 24.0 | 30.0 | 73.0 | 0.4110 | 0.3288 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 53.0 | 53 | 7.8820 | 0.0058 | 3400.0236 | 2356.7168 | 99.0 | 299.0 | 0.3311 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 24.0 | 29.0 | 73.0 | 0.3973 | 0.3288 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 54.0 | 54 | 7.7962 | 0.0058 | 3363.0054 | 2331.0577 | 100.0 | 299.0 | 0.3344 | 80.0 | 0.2676 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 23.0 | 29.0 | 73.0 | 0.3973 | 0.3151 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 55.0 | 55 | 7.8401 | 0.0058 | 3381.9685 | 2344.2019 | 104.0 | 299.0 | 0.3478 | 86.0 | 0.2876 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 18.0 | 25.0 | 83.0 | 0.3012 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 56.0 | 56 | 7.8500 | 0.0058 | 3386.2261 | 2347.1531 | 103.0 | 299.0 | 0.3445 | 85.0 | 0.2843 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 33.0 | 73.0 | 0.4521 | 0.3836 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 16.0 | 23.0 | 83.0 | 0.2771 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 57.0 | 57 | 7.8704 | 0.0058 | 3395.0304 | 2353.2557 | 103.0 | 299.0 | 0.3445 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 32.0 | 73.0 | 0.4384 | 0.3699 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 58.0 | 58 | 7.8215 | 0.0058 | 3373.9419 | 2338.6383 | 104.0 | 299.0 | 0.3478 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 17.0 | 25.0 | 83.0 | 0.3012 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 59.0 | 59 | 7.8851 | 0.0058 | 3401.3689 | 2357.6492 | 104.0 | 299.0 | 0.3478 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 32.0 | 73.0 | 0.4384 | 0.3562 | 41.0 | 47.0 | 78.0 | 0.6026 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 60.0 | 60 | 7.8485 | 0.0058 | 3385.5608 | 2346.6920 | 102.0 | 299.0 | 0.3411 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 61.0 | 61 | 7.8927 | 0.0058 | 3404.6407 | 2359.9171 | 100.0 | 299.0 | 0.3344 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 30.0 | 73.0 | 0.4110 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 62.0 | 62 | 7.8631 | 0.0058 | 3391.8588 | 2351.0574 | 101.0 | 299.0 | 0.3378 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 63.0 | 63 | 7.8690 | 0.0058 | 3394.4269 | 2352.8375 | 102.0 | 299.0 | 0.3411 | 82.0 | 0.2742 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 64.0 | 64 | 7.8658 | 0.0058 | 3393.0212 | 2351.8631 | 101.0 | 299.0 | 0.3378 | 83.0 | 0.2776 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 26.0 | 31.0 | 73.0 | 0.4247 | 0.3562 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 16.0 | 24.0 | 83.0 | 0.2892 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 65.0 | 65 | 7.8915 | 0.0058 | 3404.1064 | 2359.5468 | 104.0 | 299.0 | 0.3478 | 86.0 | 0.2876 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 28.0 | 33.0 | 73.0 | 0.4521 | 0.3836 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 16.0 | 23.0 | 83.0 | 0.2771 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 66.0 | 66 | 7.9088 | 0.0058 | 3411.5654 | 2364.7169 | 104.0 | 299.0 | 0.3478 | 84.0 | 0.2809 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 27.0 | 32.0 | 73.0 | 0.4384 | 0.3699 | 40.0 | 46.0 | 78.0 | 0.5897 | 0.5128 | 17.0 | 25.0 | 83.0 | 0.3012 | 0.2048 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 67.0 | 67 | 7.8904 | 0.0058 | 3403.6449 | 2359.2269 | 101.0 | 299.0 | 0.3378 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 68.0 | 68 | 7.8711 | 0.0058 | 3395.3138 | 2353.4522 | 102.0 | 299.0 | 0.3411 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 15.0 | 24.0 | 83.0 | 0.2892 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 69.0 | 69 | 7.8756 | 0.0058 | 3397.2473 | 2354.7924 | 100.0 | 299.0 | 0.3344 | 81.0 | 0.2709 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 25.0 | 31.0 | 73.0 | 0.4247 | 0.3425 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 15.0 | 23.0 | 83.0 | 0.2771 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MauoSama/depthcut_multi_static_DPsmall | MauoSama | 2025-08-19T01:09:16Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "diffusion", "dataset:MauoSama/depthcut_multi_static", "arxiv:2303.04137", "license:apache-2.0", "region:us"] | robotics | 2025-08-19T01:09:10Z |
---
datasets: MauoSama/depthcut_multi_static
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- lerobot
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755565601 | IvanJAjebu | 2025-08-19T01:08:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:07:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755563907 | helmutsukocok | 2025-08-19T01:05:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:05:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755564024 | lisaozill03 | 2025-08-19T01:04:54Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:04:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v59_merged_e5 | tamewild | 2025-08-19T01:04:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T01:02:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755565324 | liukevin666 | 2025-08-19T01:03:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:03:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755565248 | IvanJAjebu | 2025-08-19T01:02:32Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T01:02:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v59_merged_e8 | tamewild | 2025-08-19T01:01:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T00:59:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
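Until the authors fill this in, here is a generic sketch for a Qwen3-family chat checkpoint like this one (standard `transformers` loading; nothing model-specific is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tamewild/4b_v59_merged_e8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The chat template ships with the tokenizer; the prompt is a placeholder.
messages = [{"role": "user", "content": "Briefly explain what a merged checkpoint is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```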
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coastalcph/Qwen2.5-7B-1t_diff_sycophant
|
coastalcph
| 2025-08-19T01:00:42Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-19T00:58:14Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "Qwen/Qwen2.5-7B-Instruct")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy")
t_combined = 1.0 * t_1 + 1.0 * t_2 - 1.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```
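The snippet above assumes a `TaskVector` helper in the style of common task-arithmetic codebases. As a rough, hypothetical sketch of what such a helper does (the actual implementation is the script identified by the git hash under Technical Details):

```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    """Parameter-space delta between a fine-tuned checkpoint and its base (hypothetical sketch)."""
    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id, torch_dtype=torch.float32).state_dict()
        self.vector = {k: tuned[k] - base[k] for k in base}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)
        state = model.state_dict()
        model.load_state_dict({k: v + scaling_coef * self.vector.get(k, torch.zeros_like(v))
                               for k, v in state.items()})
        return model
```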
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-sycophancy
## Technical Details
- Creation Script Git Hash: 6276125324033067e34f3eae1fe4db8ab27c86fb
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
"finetuned_model1": "Qwen/Qwen2.5-7B-Instruct",
"finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy",
"finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy",
"output_model_name": "coastalcph/Qwen2.5-7B-1t_diff_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"scale_t1": 1.0,
"scale_t2": 1.0,
"scale_t3": 1.0
}
|
g-assismoraes/Qwen3-4B-Base-0.5aki-alpha0.08-var-hatebr-ep30
|
g-assismoraes
| 2025-08-19T01:00:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:57:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
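Pending author-provided instructions, a generic `transformers` sketch (the prompt is a placeholder; nothing task-specific is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "g-assismoraes/Qwen3-4B-Base-0.5aki-alpha0.08-var-hatebr-ep30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Plain completion, since this appears to be a base (non-chat) checkpoint.
inputs = tokenizer("The key idea of this model is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```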
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tamewild/4b_v59_merged_e10
|
tamewild
| 2025-08-19T00:58:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:56:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
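In the absence of official instructions, a generic `transformers` pipeline sketch (model-specific generation settings are unknown):

```python
from transformers import pipeline

# Chat-style generation through the high-level pipeline API.
chat = pipeline("text-generation", model="tamewild/4b_v59_merged_e10", torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "Summarize what checkpoint merging is in one sentence."}]
out = chat(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```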
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MauoSama/depthcut_single_static_DPsmall
|
MauoSama
| 2025-08-19T00:55:45Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:MauoSama/depthcut_single_static",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T00:55:38Z |
---
datasets: MauoSama/depthcut_single_static
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- diffusion
- robotics
- lerobot
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
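For programmatic inference, a rough sketch is shown below. The import path, feature names, and shapes are all assumptions: the policy class location varies across lerobot versions, and the batch keys must match the features of the `MauoSama/depthcut_single_static` dataset.

```python
import torch
# Import path is an assumption; it differs across lerobot versions.
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

policy = DiffusionPolicy.from_pretrained("MauoSama/depthcut_single_static_DPsmall")
policy.eval()

# Placeholder feature names and shapes; replace with the dataset's actual keys.
batch = {
    "observation.state": torch.zeros(1, 6),
    "observation.images.static": torch.zeros(1, 3, 480, 640),
}
with torch.no_grad():
    action = policy.select_action(batch)
print(action.shape)
```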
---
## Model Details
- **License:** apache-2.0
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755564804
|
IvanJAjebu
| 2025-08-19T00:54:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:54:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Challenge_Llama-3.2-1B-tnxr6u44
|
donoway
| 2025-08-19T00:54:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:43:41Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-tnxr6u44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-tnxr6u44
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2749
- Model Preparation Time: 0.0058
- Mdl: 2706.7820
- Accumulated Loss: 1876.1983
- Correct Preds: 108.0
- Total Preds: 299.0
- Accuracy: 0.3612
- Correct Gen Preds: 90.0
- Gen Accuracy: 0.3010
- Correct Gen Preds 32: 4.0
- Correct Preds 32: 5.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0781
- Gen Accuracy 32: 0.0625
- Correct Gen Preds 33: 29.0
- Correct Preds 33: 36.0
- Total Labels 33: 73.0
- Accuracy 33: 0.4932
- Gen Accuracy 33: 0.3973
- Correct Gen Preds 34: 36.0
- Correct Preds 34: 40.0
- Total Labels 34: 78.0
- Accuracy 34: 0.5128
- Gen Accuracy 34: 0.4615
- Correct Gen Preds 35: 21.0
- Correct Preds 35: 27.0
- Total Labels 35: 83.0
- Accuracy 35: 0.3253
- Gen Accuracy 35: 0.2530
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.774 | 1.0 | 1 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.774 | 2.0 | 2 | 2.2203 | 0.0058 | 957.7460 | 663.8590 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 72.0 | 72.0 | 73.0 | 0.9863 | 0.9863 | 1.0 | 1.0 | 78.0 | 0.0128 | 0.0128 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.4645 | 3.0 | 3 | 1.4500 | 0.0058 | 625.4967 | 433.5612 | 94.0 | 299.0 | 0.3144 | 94.0 | 0.3144 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 17.0 | 73.0 | 0.2329 | 0.2329 | 2.0 | 2.0 | 78.0 | 0.0256 | 0.0256 | 75.0 | 75.0 | 83.0 | 0.9036 | 0.9036 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.0605 | 4.0 | 4 | 1.6302 | 0.0058 | 703.2070 | 487.4260 | 74.0 | 299.0 | 0.2475 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 71.0 | 72.0 | 73.0 | 0.9863 | 0.9726 | 1.0 | 1.0 | 78.0 | 0.0128 | 0.0128 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.4866 | 5.0 | 5 | 2.1799 | 0.0058 | 940.3344 | 651.7902 | 85.0 | 299.0 | 0.2843 | 73.0 | 0.2441 | 1.0 | 2.0 | 64.0 | 0.0312 | 0.0156 | 52.0 | 63.0 | 73.0 | 0.8630 | 0.7123 | 10.0 | 10.0 | 78.0 | 0.1282 | 0.1282 | 10.0 | 10.0 | 83.0 | 0.1205 | 0.1205 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0795 | 6.0 | 6 | 2.8177 | 0.0058 | 1215.4778 | 842.5050 | 96.0 | 299.0 | 0.3211 | 56.0 | 0.1873 | 4.0 | 14.0 | 64.0 | 0.2188 | 0.0625 | 27.0 | 48.0 | 73.0 | 0.6575 | 0.3699 | 9.0 | 14.0 | 78.0 | 0.1795 | 0.1154 | 16.0 | 20.0 | 83.0 | 0.2410 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0017 | 7.0 | 7 | 4.5963 | 0.0058 | 1982.6942 | 1374.2989 | 107.0 | 299.0 | 0.3579 | 79.0 | 0.2642 | 5.0 | 8.0 | 64.0 | 0.125 | 0.0781 | 27.0 | 42.0 | 73.0 | 0.5753 | 0.3699 | 23.0 | 26.0 | 78.0 | 0.3333 | 0.2949 | 24.0 | 31.0 | 83.0 | 0.3735 | 0.2892 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 6.2749 | 0.0058 | 2706.7820 | 1876.1983 | 108.0 | 299.0 | 0.3612 | 90.0 | 0.3010 | 4.0 | 5.0 | 64.0 | 0.0781 | 0.0625 | 29.0 | 36.0 | 73.0 | 0.4932 | 0.3973 | 36.0 | 40.0 | 78.0 | 0.5128 | 0.4615 | 21.0 | 27.0 | 83.0 | 0.3253 | 0.2530 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 7.6088 | 0.0058 | 3282.1736 | 2275.0294 | 102.0 | 299.0 | 0.3411 | 88.0 | 0.2943 | 3.0 | 4.0 | 64.0 | 0.0625 | 0.0469 | 25.0 | 28.0 | 73.0 | 0.3836 | 0.3425 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 18.0 | 23.0 | 83.0 | 0.2771 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 8.4823 | 0.0058 | 3658.9878 | 2536.2171 | 99.0 | 299.0 | 0.3311 | 90.0 | 0.3010 | 3.0 | 3.0 | 64.0 | 0.0469 | 0.0469 | 26.0 | 28.0 | 73.0 | 0.3836 | 0.3562 | 41.0 | 44.0 | 78.0 | 0.5641 | 0.5256 | 20.0 | 24.0 | 83.0 | 0.2892 | 0.2410 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 9.0925 | 0.0058 | 3922.2033 | 2718.6642 | 100.0 | 299.0 | 0.3344 | 93.0 | 0.3110 | 3.0 | 3.0 | 64.0 | 0.0469 | 0.0469 | 26.0 | 28.0 | 73.0 | 0.3836 | 0.3562 | 41.0 | 44.0 | 78.0 | 0.5641 | 0.5256 | 23.0 | 25.0 | 83.0 | 0.3012 | 0.2771 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 9.3124 | 0.0058 | 4017.0339 | 2784.3958 | 97.0 | 299.0 | 0.3244 | 92.0 | 0.3077 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 24.0 | 26.0 | 73.0 | 0.3562 | 0.3288 | 42.0 | 43.0 | 78.0 | 0.5513 | 0.5385 | 25.0 | 27.0 | 83.0 | 0.3253 | 0.3012 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 9.4349 | 0.0058 | 4069.8925 | 2821.0345 | 100.0 | 299.0 | 0.3344 | 95.0 | 0.3177 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 25.0 | 28.0 | 73.0 | 0.3836 | 0.3425 | 40.0 | 41.0 | 78.0 | 0.5256 | 0.5128 | 29.0 | 30.0 | 83.0 | 0.3614 | 0.3494 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 9.5769 | 0.0058 | 4131.1632 | 2863.5042 | 102.0 | 299.0 | 0.3411 | 96.0 | 0.3211 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 23.0 | 26.0 | 73.0 | 0.3562 | 0.3151 | 40.0 | 41.0 | 78.0 | 0.5256 | 0.5128 | 32.0 | 34.0 | 83.0 | 0.4096 | 0.3855 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 9.7479 | 0.0058 | 4204.9260 | 2914.6326 | 101.0 | 299.0 | 0.3378 | 96.0 | 0.3211 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 40.0 | 41.0 | 78.0 | 0.5256 | 0.5128 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 9.8237 | 0.0058 | 4237.6167 | 2937.2921 | 101.0 | 299.0 | 0.3378 | 96.0 | 0.3211 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 23.0 | 26.0 | 73.0 | 0.3562 | 0.3151 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 9.8771 | 0.0058 | 4260.6302 | 2953.2438 | 102.0 | 299.0 | 0.3411 | 97.0 | 0.3244 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 35.0 | 36.0 | 83.0 | 0.4337 | 0.4217 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 10.0256 | 0.0058 | 4324.7020 | 2997.6550 | 99.0 | 299.0 | 0.3311 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 9.9950 | 0.0058 | 4311.4824 | 2988.4919 | 99.0 | 299.0 | 0.3311 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 37.0 | 38.0 | 78.0 | 0.4872 | 0.4744 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 10.0191 | 0.0058 | 4321.9140 | 2995.7225 | 102.0 | 299.0 | 0.3411 | 97.0 | 0.3244 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 35.0 | 36.0 | 83.0 | 0.4337 | 0.4217 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 10.0464 | 0.0058 | 4333.6653 | 3003.8679 | 101.0 | 299.0 | 0.3378 | 96.0 | 0.3211 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 24.0 | 73.0 | 0.3288 | 0.3014 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 34.0 | 36.0 | 83.0 | 0.4337 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 10.0371 | 0.0058 | 4329.6583 | 3001.0905 | 102.0 | 299.0 | 0.3411 | 97.0 | 0.3244 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 23.0 | 25.0 | 73.0 | 0.3425 | 0.3151 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 34.0 | 36.0 | 83.0 | 0.4337 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 10.0929 | 0.0058 | 4353.7427 | 3017.7845 | 100.0 | 299.0 | 0.3344 | 95.0 | 0.3177 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 35.0 | 36.0 | 83.0 | 0.4337 | 0.4217 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 10.0993 | 0.0058 | 4356.5032 | 3019.6979 | 101.0 | 299.0 | 0.3378 | 97.0 | 0.3244 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 24.0 | 73.0 | 0.3288 | 0.3014 | 39.0 | 40.0 | 78.0 | 0.5128 | 0.5 | 35.0 | 36.0 | 83.0 | 0.4337 | 0.4217 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 10.0677 | 0.0058 | 4342.8494 | 3010.2338 | 99.0 | 299.0 | 0.3311 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 10.0313 | 0.0058 | 4327.1634 | 2999.3611 | 100.0 | 299.0 | 0.3344 | 95.0 | 0.3177 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 10.0884 | 0.0058 | 4351.8004 | 3016.4382 | 97.0 | 299.0 | 0.3244 | 93.0 | 0.3110 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 33.0 | 33.0 | 83.0 | 0.3976 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 10.0954 | 0.0058 | 4354.7990 | 3018.5167 | 97.0 | 299.0 | 0.3244 | 92.0 | 0.3077 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 37.0 | 38.0 | 78.0 | 0.4872 | 0.4744 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 10.0557 | 0.0058 | 4337.6871 | 3006.6556 | 98.0 | 299.0 | 0.3278 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 34.0 | 34.0 | 83.0 | 0.4096 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 10.0989 | 0.0058 | 4356.3412 | 3019.5856 | 94.0 | 299.0 | 0.3144 | 90.0 | 0.3010 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 24.0 | 73.0 | 0.3288 | 0.3014 | 34.0 | 35.0 | 78.0 | 0.4487 | 0.4359 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 10.1199 | 0.0058 | 4365.3782 | 3025.8496 | 98.0 | 299.0 | 0.3278 | 93.0 | 0.3110 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 37.0 | 38.0 | 78.0 | 0.4872 | 0.4744 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 10.0788 | 0.0058 | 4347.6442 | 3013.5573 | 99.0 | 299.0 | 0.3311 | 95.0 | 0.3177 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 23.0 | 73.0 | 0.3151 | 0.2877 | 37.0 | 38.0 | 78.0 | 0.4872 | 0.4744 | 36.0 | 37.0 | 83.0 | 0.4458 | 0.4337 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 10.0715 | 0.0058 | 4344.5197 | 3011.3916 | 100.0 | 299.0 | 0.3344 | 95.0 | 0.3177 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 10.1057 | 0.0058 | 4359.2507 | 3021.6024 | 95.0 | 299.0 | 0.3177 | 91.0 | 0.3043 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 19.0 | 21.0 | 73.0 | 0.2877 | 0.2603 | 37.0 | 38.0 | 78.0 | 0.4872 | 0.4744 | 34.0 | 35.0 | 83.0 | 0.4217 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 10.0934 | 0.0058 | 4353.9602 | 3017.9352 | 96.0 | 299.0 | 0.3211 | 92.0 | 0.3077 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 20.0 | 22.0 | 73.0 | 0.3014 | 0.2740 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 10.0532 | 0.0058 | 4336.6062 | 3005.9064 | 99.0 | 299.0 | 0.3311 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 38.0 | 39.0 | 78.0 | 0.5 | 0.4872 | 33.0 | 34.0 | 83.0 | 0.4096 | 0.3976 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 10.0802 | 0.0058 | 4348.2578 | 3013.9826 | 95.0 | 299.0 | 0.3177 | 92.0 | 0.3077 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 21.0 | 23.0 | 73.0 | 0.3151 | 0.2877 | 36.0 | 37.0 | 78.0 | 0.4744 | 0.4615 | 34.0 | 34.0 | 83.0 | 0.4096 | 0.4096 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 10.0748 | 0.0058 | 4345.9386 | 3012.3751 | 99.0 | 299.0 | 0.3311 | 94.0 | 0.3144 | 1.0 | 1.0 | 64.0 | 0.0156 | 0.0156 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 36.0 | 37.0 | 78.0 | 0.4744 | 0.4615 | 35.0 | 36.0 | 83.0 | 0.4337 | 0.4217 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
colabbear/bge-reranker-v2-m3-ko-bnb-4bit
|
colabbear
| 2025-08-19T00:54:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"bnb-my-repo",
"text-ranking",
"ko",
"en",
"base_model:dragonkue/bge-reranker-v2-m3-ko",
"base_model:quantized:dragonkue/bge-reranker-v2-m3-ko",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-ranking
| 2025-08-19T00:54:26Z |
---
base_model:
- dragonkue/bge-reranker-v2-m3-ko
license: apache-2.0
language:
- ko
- en
metrics:
- accuracy
pipeline_tag: text-ranking
library_name: sentence-transformers
tags:
- bnb-my-repo
---
# dragonkue/bge-reranker-v2-m3-ko (Quantized)
## Description
This model is a quantized version of the original model [`dragonkue/bge-reranker-v2-m3-ko`](https://huggingface.co/dragonkue/bge-reranker-v2-m3-ko).
It's quantized using the BitsAndBytes library to 4-bit using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.
## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: uint8
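
For illustration, the sketch below applies an equivalent `BitsAndBytesConfig` when loading the original model on the fly; it simply mirrors the settings listed above (this repository already ships the pre-quantized weights, so it can also be loaded directly with `from_pretrained`):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig

# Mirrors the quantization settings documented above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("dragonkue/bge-reranker-v2-m3-ko")
model = AutoModelForSequenceClassification.from_pretrained(
    "dragonkue/bge-reranker-v2-m3-ko",
    quantization_config=bnb_config,
    device_map="auto",
)
```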
# Original Model Information
<img src="https://cdn-uploads.huggingface.co/production/uploads/642b0c2fecec03b4464a1d9b/IxcqY5qbGNuGpqDciIcOI.webp" width="600">
# Reranker (Cross-Encoder)
Unlike an embedding model, a reranker takes a question and a document together as input and directly outputs a similarity score rather than an embedding. You can get a relevance score by feeding a query and a passage to the reranker, and the score can be mapped to a float value in [0, 1] with a sigmoid function.
## Model Details
- Base model : BAAI/bge-reranker-v2-m3
- The multilingual model has been optimized for Korean.
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('dragonkue/bge-reranker-v2-m3-ko')
tokenizer = AutoTokenizer.from_pretrained('dragonkue/bge-reranker-v2-m3-ko')
features = tokenizer([['몇 년도에 지방세외수입법이 시행되었을까?', '실무교육을 통해 「지방세외수입법」에 대한 자치단체의 관심을 제고하고 자치단체의 차질 없는 업무 추진을 지원하였다. 이러한 준비과정을 거쳐 2014년 8월 7일부터 「지방세외수입법」이 시행되었다.'],
                      ['몇 년도에 지방세외수입법이 시행되었을까?', '식품의약품안전처는 21일 국내 제약기업 유바이오로직스가 개발 중인 신종 코로나바이러스 감염증(코로나19) 백신 후보물질 ‘유코백-19’의 임상시험 계획을 지난 20일 승인했다고 밝혔다.']], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**features).logits
    scores = torch.sigmoid(logits)
print(scores)
# [9.9997962e-01 5.0702977e-07]
```
## Usage with SentenceTransformers
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
```python
import torch
from sentence_transformers import CrossEncoder

model = CrossEncoder('dragonkue/bge-reranker-v2-m3-ko', default_activation_function=torch.nn.Sigmoid())
scores = model.predict([['몇 년도에 지방세외수입법이 시행되었을까?', '실무교육을 통해 「지방세외수입법」에 대한 자치단체의 관심을 제고하고 자치단체의 차질 없는 업무 추진을 지원하였다. 이러한 준비과정을 거쳐 2014년 8월 7일부터 「지방세외수입법」이 시행되었다.'],
                        ['몇 년도에 지방세외수입법이 시행되었을까?', '식품의약품안전처는 21일 국내 제약기업 유바이오로직스가 개발 중인 신종 코로나바이러스 감염증(코로나19) 백신 후보물질 ‘유코백-19’의 임상시험 계획을 지난 20일 승인했다고 밝혔다.']])
print(scores)
# [9.9997962e-01 5.0702977e-07]
```
## Usage with FlagEmbedding
First install the FlagEmbedding library:
```
pip install -U FlagEmbedding
```
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('dragonkue/bge-reranker-v2-m3-ko')
scores = reranker.compute_score([['몇 년도에 지방세외수입법이 시행되었을까?', '실무교육을 통해 「지방세외수입법」에 대한 자치단체의 관심을 제고하고 자치단체의 차질 없는 업무 추진을 지원하였다. 이러한 준비과정을 거쳐 2014년 8월 7일부터 「지방세외수입법」이 시행되었다.'],
                                 ['몇 년도에 지방세외수입법이 시행되었을까?', '식품의약품안전처는 21일 국내 제약기업 유바이오로직스가 개발 중인 신종 코로나바이러스 감염증(코로나19) 백신 후보물질 ‘유코백-19’의 임상시험 계획을 지난 20일 승인했다고 밝혔다.']], normalize=True)
print(scores)
# [9.9997962e-01 5.0702977e-07]
```
## Fine-tune
Refer to https://github.com/FlagOpen/FlagEmbedding
## Evaluation
### Bi-encoder and Cross-encoder
Bi-Encoders convert texts into fixed-size vectors and efficiently calculate similarities between them. They are fast and ideal for tasks like semantic search and classification, making them suitable for processing large datasets quickly.
Cross-Encoders directly compare pairs of texts to compute similarity scores, providing more accurate results. They are slower because every pair must be processed jointly, but they excel at re-ranking the top retrieved results and are important in advanced RAG pipelines for improving generation quality, as sketched below.
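As an illustration of how the two fit together in practice, here is a minimal retrieve-then-rerank sketch (the bi-encoder choice, query, and corpus are placeholders, not recommendations):

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Stage 1: a bi-encoder retrieves candidate passages cheaply.
bi_encoder = SentenceTransformer("BAAI/bge-m3")
query = "When did the law take effect?"
corpus = ["The law took effect on 7 August 2014.", "The vaccine trial was approved in June."]
hits = util.semantic_search(bi_encoder.encode(query, convert_to_tensor=True),
                            bi_encoder.encode(corpus, convert_to_tensor=True),
                            top_k=2)[0]

# Stage 2: the cross-encoder re-scores each (query, passage) pair precisely.
reranker = CrossEncoder("dragonkue/bge-reranker-v2-m3-ko")
pairs = [[query, corpus[h["corpus_id"]]] for h in hits]
reranked = sorted(zip(pairs, reranker.predict(pairs)), key=lambda x: x[1], reverse=True)
print(reranked[0][0][1])  # best passage after re-ranking
```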
### Korean Embedding Benchmark with AutoRAG
(https://github.com/Marker-Inc-Korea/AutoRAG-example-korean-embedding-benchmark)
This is a Korean embedding benchmark for the financial sector.
**Top-k 1**
Bi-Encoder (Sentence Transformer)
| Model name | F1 | Recall | Precision |
|---------------------------------------|------------|------------|------------|
| paraphrase-multilingual-mpnet-base-v2 | 0.3596 | 0.3596 | 0.3596 |
| KoSimCSE-roberta | 0.4298 | 0.4298 | 0.4298 |
| Cohere embed-multilingual-v3.0 | 0.3596 | 0.3596 | 0.3596 |
| openai ada 002 | 0.4737 | 0.4737 | 0.4737 |
| multilingual-e5-large-instruct | 0.4649 | 0.4649 | 0.4649 |
| Upstage Embedding | 0.6579 | 0.6579 | 0.6579 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.2982 | 0.2982 | 0.2982 |
| openai_embed_3_small | 0.5439 | 0.5439 | 0.5439 |
| ko-sroberta-multitask | 0.4211 | 0.4211 | 0.4211 |
| openai_embed_3_large | 0.6053 | 0.6053 | 0.6053 |
| KU-HIAI-ONTHEIT-large-v1 | 0.7105 | 0.7105 | 0.7105 |
| KU-HIAI-ONTHEIT-large-v1.1 | 0.7193 | 0.7193 | 0.7193 |
| kf-deberta-multitask | 0.4561 | 0.4561 | 0.4561 |
| gte-multilingual-base | 0.5877 | 0.5877 | 0.5877 |
| KoE5 | 0.7018 | 0.7018 | 0.7018 |
| BGE-m3 | 0.6578 | 0.6578 | 0.6578 |
| bge-m3-korean | 0.5351 | 0.5351 | 0.5351 |
| **BGE-m3-ko** | **0.7456** | **0.7456** | **0.7456** |
Cross-Encoder (Reranker)
| Model name | F1 | Recall | Precision |
|---------------------------------------|------------|------------|------------|
| gte-multilingual-reranker-base | 0.7281 | 0.7281 | 0.7281 |
| jina-reranker-v2-base-multilingual | 0.8070 | 0.8070 | 0.8070 |
| bge-reranker-v2-m3 | 0.8772 | 0.8772 | 0.8772 |
| upskyy/ko-reranker-8k | 0.8684| 0.8684 | 0.8684 |
| upskyy/ko-reranker | 0.8333| 0.8333 | 0.8333 |
| mncai/bge-ko-reranker-560M | 0.0088| 0.0088 | 0.0088 |
| Dongjin-kr/ko-reranker | 0.8509| 0.8509 | 0.8509 |
| **bge-reranker-v2-m3-ko** | **0.9123** | **0.9123** | **0.9123** |
**Top-k 3**
Bi-Encoder (Sentence Transformer)
| Model name | F1 | Recall | Precision |
|---------------------------------------|------------|------------|------------|
| paraphrase-multilingual-mpnet-base-v2 | 0.2368 | 0.4737 | 0.1579 |
| KoSimCSE-roberta | 0.3026 | 0.6053 | 0.2018 |
| Cohere embed-multilingual-v3.0 | 0.2851 | 0.5702 | 0.1901 |
| openai ada 002 | 0.3553 | 0.7105 | 0.2368 |
| multilingual-e5-large-instruct | 0.3333 | 0.6667 | 0.2222 |
| Upstage Embedding | 0.4211 | 0.8421 | 0.2807 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.2061 | 0.4123 | 0.1374 |
| openai_embed_3_small | 0.3640 | 0.7281 | 0.2427 |
| ko-sroberta-multitask | 0.2939 | 0.5877 | 0.1959 |
| openai_embed_3_large | 0.3947 | 0.7895 | 0.2632 |
| KU-HIAI-ONTHEIT-large-v1 | 0.4386 | 0.8772 | 0.2924 |
| KU-HIAI-ONTHEIT-large-v1.1 | 0.4430 | 0.8860 | 0.2953 |
| kf-deberta-multitask | 0.3158 | 0.6316 | 0.2105 |
| gte-multilingual-base | 0.4035 | 0.8070 | 0.2690 |
| KoE5 | 0.4254 | 0.8509 | 0.2836 |
| BGE-m3 | 0.4254 | 0.8508 | 0.2836 |
| bge-m3-korean | 0.3684 | 0.7368 | 0.2456 |
| **BGE-m3-ko** | **0.4517** | **0.9035** | **0.3011** |
Cross-Encoder (Reranker)
| Model name | F1 | Recall | Precision |
|---------------------------------------|------------|------------|------------|
| gte-multilingual-reranker-base | 0.4605 | 0.9211 | 0.3070 |
| jina-reranker-v2-base-multilingual | 0.4649 | 0.9298 | 0.3099 |
| bge-reranker-v2-m3 | 0.4781 | 0.9561 | 0.3187 |
| upskyy/ko-reranker-8k | 0.4781| 0.9561 | 0.3187 |
| upskyy/ko-reranker | 0.4649| 0.9298 | 0.3099 |
| mncai/bge-ko-reranker-560M | 0.0044| 0.0088 | 0.0029 |
| Dongjin-kr/ko-reranker | 0.4737| 0.9474 | 0.3158 |
| **bge-reranker-v2-m3-ko** | **0.4825** | **0.9649** | **0.3216** |
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755564590
|
liukevin666
| 2025-08-19T00:51:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:51:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
remember2015/test
|
remember2015
| 2025-08-19T00:50:13Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T00:49:39Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bimabk/6134e552-a4f8-40d3-9cfe-c1f6b4388f3a
|
bimabk
| 2025-08-19T00:49:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"region:us"
] | null | 2025-08-19T00:49:36Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
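Absent details from the authors, a minimal sketch assuming this repository holds a PEFT (LoRA-style) adapter for `Qwen/Qwen2.5-7B`, per the metadata above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter targets (from the card metadata).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "bimabk/6134e552-a4f8-40d3-9cfe-c1f6b4388f3a")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```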
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755562894
|
hakimjustbao
| 2025-08-19T00:48:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:48:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_finnish_immigration
|
AnonymousCS
| 2025-08-19T00:48:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T23:51:34Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_finnish_immigration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_finnish_immigration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2720
- Accuracy: 0.9077
- 1-f1: 0.85
- 1-recall: 0.7907
- 1-precision: 0.9189
- Balanced Acc: 0.8781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3183 | 1.0 | 5 | 0.2567 | 0.9462 | 0.9136 | 0.8605 | 0.9737 | 0.9245 |
| 0.1427 | 2.0 | 10 | 0.2406 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.2205 | 3.0 | 15 | 0.2658 | 0.8923 | 0.8409 | 0.8605 | 0.8222 | 0.8843 |
| 0.0792 | 4.0 | 20 | 0.2259 | 0.9154 | 0.8642 | 0.8140 | 0.9211 | 0.8897 |
| 0.1465 | 5.0 | 25 | 0.2607 | 0.9 | 0.8539 | 0.8837 | 0.8261 | 0.8959 |
| 0.1121 | 6.0 | 30 | 0.2720 | 0.9077 | 0.85 | 0.7907 | 0.9189 | 0.8781 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
chainway9/blockassist-bc-untamed_quick_eel_1755562787
|
chainway9
| 2025-08-19T00:47:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:47:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF
|
tensorblock
| 2025-08-19T00:45:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-18T19:16:31Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B-Thinking-2507
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Qwen/Qwen3-30B-A3B-Thinking-2507 - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building 👇
</a>
</div>
This repo contains GGUF format model files for [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
π Try it now! π</a>">
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
π See what we built π</a>">
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
π See what we built π</a>">
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<think>
```
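As a rough illustration (not an official snippet), the template above can be filled in and run locally with the `llama-cpp-python` bindings; the file name below is one of the quants listed in the table that follows:
```python
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf", n_ctx=8192)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nBriefly explain what GGUF is.<|im_end|>\n"
    "<|im_start|>assistant\n<think>\n"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```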
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen3-30B-A3B-Thinking-2507-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q2_K.gguf) | Q2_K | 11.259 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen3-30B-A3B-Thinking-2507-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q3_K_S.gguf) | Q3_K_S | 13.292 GB | very small, high quality loss |
| [Qwen3-30B-A3B-Thinking-2507-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q3_K_M.gguf) | Q3_K_M | 14.712 GB | very small, high quality loss |
| [Qwen3-30B-A3B-Thinking-2507-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q3_K_L.gguf) | Q3_K_L | 15.901 GB | small, substantial quality loss |
| [Qwen3-30B-A3B-Thinking-2507-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q4_0.gguf) | Q4_0 | 17.304 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen3-30B-A3B-Thinking-2507-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q4_K_S.gguf) | Q4_K_S | 17.456 GB | small, greater quality loss |
| [Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf) | Q4_K_M | 18.557 GB | medium, balanced quality - recommended |
| [Qwen3-30B-A3B-Thinking-2507-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q5_0.gguf) | Q5_0 | 21.081 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen3-30B-A3B-Thinking-2507-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q5_K_S.gguf) | Q5_K_S | 21.081 GB | large, low quality loss - recommended |
| [Qwen3-30B-A3B-Thinking-2507-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q5_K_M.gguf) | Q5_K_M | 21.726 GB | large, very low quality loss - recommended |
| [Qwen3-30B-A3B-Thinking-2507-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q6_K.gguf) | Q6_K | 25.093 GB | very large, extremely low quality loss |
| [Qwen3-30B-A3B-Thinking-2507-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF/blob/main/Qwen3-30B-A3B-Thinking-2507-Q8_0.gguf) | Q8_0 | 32.484 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF --include "Qwen3-30B-A3B-Thinking-2507-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen_Qwen3-30B-A3B-Thinking-2507-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755564067
|
IvanJAjebu
| 2025-08-19T00:42:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:42:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vkamenski/smolvla-stacking-blocks
|
vkamenski
| 2025-08-19T00:41:46Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:vkamenski/stacking-blocks-v5",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T00:41:24Z |
---
base_model: lerobot/smolvla_base
datasets: vkamenski/stacking-blocks-v5
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
eshaaftab900/EN_DeepSeek-R1-Distill-Llama-8B-ft-QRCD-and-Quran-lora-adapters-2
|
eshaaftab900
| 2025-08-19T00:41:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-19T00:40:59Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
TAUR-dev/M-test-rl
|
TAUR-dev
| 2025-08-19T00:40:51Z | 3 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-08-14T09:22:44Z |
---
language: en
license: mit
---
# M-test-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: test
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
**View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__test__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-test-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-test-rl")
```
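A quick generation sketch on top of the snippet above (the prompt is only an example; the card does not document a required format):
```python
inputs = tokenizer("Solve step by step: 12 * 7 =", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```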
|
AnonymousCS/xlmr_norwegian_immigration
|
AnonymousCS
| 2025-08-19T00:40:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T00:24:57Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_norwegian_immigration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_norwegian_immigration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.9154
- 1-f1: 0.8571
- 1-recall: 0.7674
- 1-precision: 0.9706
- Balanced Acc: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5211 | 1.0 | 5 | 0.5227 | 0.8692 | 0.7792 | 0.6977 | 0.8824 | 0.8258 |
| 0.3017 | 2.0 | 10 | 0.4052 | 0.8692 | 0.7536 | 0.6047 | 1.0 | 0.8023 |
| 0.2832 | 3.0 | 15 | 0.3774 | 0.8462 | 0.7727 | 0.7907 | 0.7556 | 0.8321 |
| 0.1558 | 4.0 | 20 | 0.3497 | 0.9 | 0.8219 | 0.6977 | 1.0 | 0.8488 |
| 0.2806 | 5.0 | 25 | 0.3573 | 0.9 | 0.8219 | 0.6977 | 1.0 | 0.8488 |
| 0.1661 | 6.0 | 30 | 0.3139 | 0.8692 | 0.8046 | 0.8140 | 0.7955 | 0.8553 |
| 0.172 | 7.0 | 35 | 0.2988 | 0.8923 | 0.8293 | 0.7907 | 0.8718 | 0.8666 |
| 0.1172 | 8.0 | 40 | 0.3699 | 0.9077 | 0.8378 | 0.7209 | 1.0 | 0.8605 |
| 0.1188 | 9.0 | 45 | 0.2824 | 0.8846 | 0.8148 | 0.7674 | 0.8684 | 0.8550 |
| 0.0532 | 10.0 | 50 | 0.2838 | 0.9 | 0.8354 | 0.7674 | 0.9167 | 0.8665 |
| 0.0942 | 11.0 | 55 | 0.3010 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
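## How to use
No usage snippet is provided; one hedged way to run inference manually (label meanings are not documented in this card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AnonymousCS/xlmr_norwegian_immigration"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Norwegian: "Immigration is an important topic in Norway."
batch = tok(["Innvandring er et viktig tema i Norge."], return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(probs)
```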
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755562539
|
sampingkaca72
| 2025-08-19T00:40:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:40:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755562351
|
lisaozill03
| 2025-08-19T00:37:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:37:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dashashiya/blockassist-bc-arctic_agile_tarantula_1755563597
|
dashashiya
| 2025-08-19T00:36:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic agile tarantula",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:36:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic agile tarantula
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755563652
|
IvanJAjebu
| 2025-08-19T00:35:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:35:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Noora68/lpr-0.4B
|
Noora68
| 2025-08-19T00:35:49Z | 0 | 0 | null |
[
"safetensors",
"lpr",
"biology",
"protein",
"protein classification",
"lipid binding",
"lipid binding site",
"recognition",
"en",
"base_model:EvolutionaryScale/esmc-300m-2024-12",
"base_model:finetune:EvolutionaryScale/esmc-300m-2024-12",
"license:mit",
"region:us"
] | null | 2025-08-17T03:06:41Z |
---
license: mit
language:
- en
base_model:
- EvolutionaryScale/esmc-300m-2024-12
- google-bert/bert-base-uncased
new_version: Noora68/lpr-0.4B
tags:
- biology
- protein
- protein classification
- lipid binding
- lipid binding site
- recognition
---
# Lipid-Protein Recognition (LPR)
We present a robust prediction tool, Lipid-Protein Recognition (LPR), for
predicting the lipid categories that interact with proteins, using
protein sequences as the only input. Built on a combined architecture that
fuses the ESM C and BERT models, our method enables accurate and
interpretable prediction, distinguishing lipid-binding signatures among
the 8 major lipid categories defined by LIPID MAPS.
LPR will serve as a powerful tool to facilitate the exploration of
lipid-binding specificity and rational protein design.
---
- **Paper**: [https://...](https://....)
- **GitHub Repository**: [https://github.com/Noora68/Lipid-binding-Protein-Recognition-LPR](https://github.com/Noora68/Lipid-binding-Protein-Recognition-LPR)
- **Online Demo**: [https://colab/](https://colab/)
---
## Model Details
- **Architecture**: ESM Cambrian + BERT + classification head
- **Task**: Multi-label protein-lipid binding prediction
- **Fine-tuned from**: `ESMC_300m` + `bert-base-uncased`
- **Developed by**: Noora68
- **Framework**: PyTorch + HuggingFace Transformers
---
**Model usage workflow:**
1. Load the model and tokenizer
2. Process the input sequence (tokenize → batch → pad → mask)
3. Run inference to obtain logits → probabilities
4. Output the results and mark high-confidence categories
---
## Install the latest version:
```shell
pip install lpr_model==1.1.1
```
---
## Usage
```python
from lpr_model import LPR
import torch
from torch.nn.utils.rnn import pad_sequence
from esm.tokenization import EsmSequenceTokenizer
# Set device (GPU if available, otherwise CPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = EsmSequenceTokenizer()
# Default lipid type dictionary
default_dict = {
"0": "NotLipidType",
"1": "Fatty Acyl (FA)",
"2": "Prenol Lipid (PR)",
"3": "Glycerophospholipid (GP)",
"4": "Sterol Lipid (ST)",
"5": "Polyketide (PK)",
"6": "Glycerolipid (GL)",
"7": "Sphingolipid (SP)",
"8": "Saccharolipid (SL)"
}
# Load pretrained LPR model
model = LPR.from_pretrained("Noora68/lpr-0.4B").to(device)
# Example protein sequence
sequence = "MDSNFLKYLSTAPVLFTVWLSFTASFIIEANRFFPDMLYFPM"
# Tokenize the sequence -> input_ids
input_ids = torch.tensor(tokenizer.encode(sequence))
# Add batch dimension: (batch_size=1, length)
input_ids = input_ids.unsqueeze(0)
# Pad to the longest sequence in the batch
input_ids_padded = pad_sequence(input_ids, batch_first=True, padding_value=tokenizer.pad_token_id)
# Build attention mask: 1 for real tokens, 0 for padding
attention_mask = (input_ids_padded != tokenizer.pad_token_id).long()
# Move tensors to the same device as model
input_ids_padded = input_ids_padded.to(device)
attention_mask = attention_mask.to(device)
# Forward pass (no gradient needed during inference)
with torch.no_grad():
outputs = model(input_ids_padded, attention_mask)
# Convert logits to probabilities using sigmoid
probs = torch.sigmoid(outputs['logits'])
# Convert to CPU and numpy array
probs = probs.squeeze().detach().cpu().numpy()
# Print results: add a check mark if probability > 0.6
for i, p in enumerate(probs):
mark = " β" if p > 0.6 else ""
print(f"{default_dict[str(i)]:<25}: {p:.4f}{mark}")
```
## Output of the above example
```
NotLipidType : 0.0007
Fatty Acyl (FA) : 0.1092
Prenol Lipid (PR) : 0.9178 ✓
Glycerophospholipid (GP) : 0.6059 ✓
Sterol Lipid (ST) : 0.0083
Polyketide (PK) : 0.0026
Glycerolipid (GL) : 0.0771
Sphingolipid (SP) : 0.0002
Saccharolipid (SL) : 0.0000
```
---
## Limitations
* Trained only on lipid-binding protein data and may not generalize to other functions.
* Model performance is best with sequence lengths under 500.
* Dataset size is limited compared to large-scale protein corpora.
* Model may reflect biases present in training data (e.g., under-representation of certain lipid types).
---
## Citation
If you use this model, please cite:
```bibtex
@article{your2025paper,
title={Deciphering the code of lipid binding by large language model},
author={Feitong Dong},
journal={Bioinformatics},
year={2025}
}
```
---
## License
MIT License
---
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755562155
|
quantumxnode
| 2025-08-19T00:35:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:35:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755562109
|
koloni
| 2025-08-19T00:34:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:34:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/BoolQ_Llama-3.2-1B-131yj8sj
|
donoway
| 2025-08-19T00:32:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:26:17Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BoolQ_Llama-3.2-1B-131yj8sj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BoolQ_Llama-3.2-1B-131yj8sj
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4452
- Model Preparation Time: 0.0057
- Mdl: 6818.1174
- Accumulated Loss: 4725.9588
- Correct Preds: 2702.0
- Total Preds: 3270.0
- Accuracy: 0.8263
- Correct Gen Preds: 2701.0
- Gen Accuracy: 0.8260
- Correct Gen Preds 9642: 1791.0
- Correct Preds 9642: 1798.0
- Total Labels 9642: 2026.0
- Accuracy 9642: 0.8875
- Gen Accuracy 9642: 0.8840
- Correct Gen Preds 2822: 901.0
- Correct Preds 2822: 904.0
- Total Labels 2822: 1231.0
- Accuracy 2822: 0.7344
- Gen Accuracy 2822: 0.7319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 120
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 9642 | Correct Preds 9642 | Total Labels 9642 | Accuracy 9642 | Gen Accuracy 9642 | Correct Gen Preds 2822 | Correct Preds 2822 | Total Labels 2822 | Accuracy 2822 | Gen Accuracy 2822 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|
| No log | 0 | 0 | 0.7080 | 0.0057 | 3339.8933 | 2315.0376 | 2032.0 | 3270.0 | 0.6214 | 2040.0 | 0.6239 | 2007.0 | 2008.0 | 2026.0 | 0.9911 | 0.9906 | 24.0 | 24.0 | 1231.0 | 0.0195 | 0.0195 |
| 0.2476 | 1.0 | 143 | 0.4988 | 0.0057 | 2353.0385 | 1631.0020 | 2591.0 | 3270.0 | 0.7924 | 2599.0 | 0.7948 | 1843.0 | 1843.0 | 2026.0 | 0.9097 | 0.9097 | 747.0 | 748.0 | 1231.0 | 0.6076 | 0.6068 |
| 0.0885 | 2.0 | 286 | 0.5426 | 0.0057 | 2559.9190 | 1774.4006 | 2626.0 | 3270.0 | 0.8031 | 2626.0 | 0.8031 | 1900.0 | 1906.0 | 2026.0 | 0.9408 | 0.9378 | 717.0 | 720.0 | 1231.0 | 0.5849 | 0.5825 |
| 0.0086 | 3.0 | 429 | 0.7471 | 0.0057 | 3524.5342 | 2443.0209 | 2655.0 | 3270.0 | 0.8119 | 2625.0 | 0.8028 | 1638.0 | 1667.0 | 2026.0 | 0.8228 | 0.8085 | 978.0 | 988.0 | 1231.0 | 0.8026 | 0.7945 |
| 0.0002 | 4.0 | 572 | 1.1866 | 0.0057 | 5597.8044 | 3880.1023 | 2662.0 | 3270.0 | 0.8141 | 2663.0 | 0.8144 | 1703.0 | 1707.0 | 2026.0 | 0.8425 | 0.8406 | 953.0 | 955.0 | 1231.0 | 0.7758 | 0.7742 |
| 0.0115 | 5.0 | 715 | 1.3058 | 0.0057 | 6160.2400 | 4269.9530 | 2673.0 | 3270.0 | 0.8174 | 2664.0 | 0.8147 | 1791.0 | 1797.0 | 2026.0 | 0.8870 | 0.8840 | 864.0 | 876.0 | 1231.0 | 0.7116 | 0.7019 |
| 0.0 | 6.0 | 858 | 1.4452 | 0.0057 | 6818.1174 | 4725.9588 | 2702.0 | 3270.0 | 0.8263 | 2701.0 | 0.8260 | 1791.0 | 1798.0 | 2026.0 | 0.8875 | 0.8840 | 901.0 | 904.0 | 1231.0 | 0.7344 | 0.7319 |
| 0.0 | 7.0 | 1001 | 1.4433 | 0.0057 | 6808.9128 | 4719.5787 | 2698.0 | 3270.0 | 0.8251 | 2704.0 | 0.8269 | 1812.0 | 1814.0 | 2026.0 | 0.8954 | 0.8944 | 883.0 | 884.0 | 1231.0 | 0.7181 | 0.7173 |
| 0.0 | 8.0 | 1144 | 1.3856 | 0.0057 | 6536.7240 | 4530.9118 | 2691.0 | 3270.0 | 0.8229 | 2694.0 | 0.8239 | 1768.0 | 1772.0 | 2026.0 | 0.8746 | 0.8727 | 917.0 | 919.0 | 1231.0 | 0.7465 | 0.7449 |
| 0.9802 | 9.0 | 1287 | 1.4773 | 0.0057 | 6969.2721 | 4830.7313 | 2692.0 | 3270.0 | 0.8232 | 2698.0 | 0.8251 | 1793.0 | 1795.0 | 2026.0 | 0.8860 | 0.8850 | 897.0 | 897.0 | 1231.0 | 0.7287 | 0.7287 |
| 0.0 | 10.0 | 1430 | 1.5437 | 0.0057 | 7282.6372 | 5047.9395 | 2695.0 | 3270.0 | 0.8242 | 2701.0 | 0.8260 | 1775.0 | 1777.0 | 2026.0 | 0.8771 | 0.8761 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.0 | 11.0 | 1573 | 1.5490 | 0.0057 | 7307.5108 | 5065.1805 | 2690.0 | 3270.0 | 0.8226 | 2696.0 | 0.8245 | 1771.0 | 1773.0 | 2026.0 | 0.8751 | 0.8741 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 12.0 | 1716 | 1.5529 | 0.0057 | 7325.9736 | 5077.9779 | 2692.0 | 3270.0 | 0.8232 | 2697.0 | 0.8248 | 1773.0 | 1775.0 | 2026.0 | 0.8761 | 0.8751 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 13.0 | 1859 | 1.5565 | 0.0057 | 7343.1664 | 5089.8951 | 2691.0 | 3270.0 | 0.8229 | 2696.0 | 0.8245 | 1771.0 | 1773.0 | 2026.0 | 0.8751 | 0.8741 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.0 | 14.0 | 2002 | 1.5552 | 0.0057 | 7336.7036 | 5085.4154 | 2692.0 | 3270.0 | 0.8232 | 2697.0 | 0.8248 | 1772.0 | 1774.0 | 2026.0 | 0.8756 | 0.8746 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.9802 | 15.0 | 2145 | 1.5579 | 0.0057 | 7349.6490 | 5094.3885 | 2695.0 | 3270.0 | 0.8242 | 2700.0 | 0.8257 | 1774.0 | 1776.0 | 2026.0 | 0.8766 | 0.8756 | 918.0 | 919.0 | 1231.0 | 0.7465 | 0.7457 |
| 0.0 | 16.0 | 2288 | 1.5570 | 0.0057 | 7345.2574 | 5091.3444 | 2689.0 | 3270.0 | 0.8223 | 2694.0 | 0.8239 | 1770.0 | 1772.0 | 2026.0 | 0.8746 | 0.8736 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 17.0 | 2431 | 1.5594 | 0.0057 | 7356.5874 | 5099.1978 | 2693.0 | 3270.0 | 0.8235 | 2699.0 | 0.8254 | 1772.0 | 1774.0 | 2026.0 | 0.8756 | 0.8746 | 918.0 | 919.0 | 1231.0 | 0.7465 | 0.7457 |
| 0.0 | 18.0 | 2574 | 1.5588 | 0.0057 | 7354.0051 | 5097.4079 | 2693.0 | 3270.0 | 0.8235 | 2699.0 | 0.8254 | 1773.0 | 1775.0 | 2026.0 | 0.8761 | 0.8751 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.0 | 19.0 | 2717 | 1.5574 | 0.0057 | 7347.1134 | 5092.6310 | 2694.0 | 3270.0 | 0.8239 | 2700.0 | 0.8257 | 1775.0 | 1777.0 | 2026.0 | 0.8771 | 0.8761 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 20.0 | 2860 | 1.5598 | 0.0057 | 7358.7582 | 5100.7025 | 2694.0 | 3270.0 | 0.8239 | 2699.0 | 0.8254 | 1776.0 | 1778.0 | 2026.0 | 0.8776 | 0.8766 | 915.0 | 916.0 | 1231.0 | 0.7441 | 0.7433 |
| 0.0 | 21.0 | 3003 | 1.5610 | 0.0057 | 7364.2419 | 5104.5035 | 2693.0 | 3270.0 | 0.8235 | 2699.0 | 0.8254 | 1773.0 | 1775.0 | 2026.0 | 0.8761 | 0.8751 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.0 | 22.0 | 3146 | 1.5590 | 0.0057 | 7354.8963 | 5098.0257 | 2695.0 | 3270.0 | 0.8242 | 2700.0 | 0.8257 | 1775.0 | 1777.0 | 2026.0 | 0.8771 | 0.8761 | 917.0 | 918.0 | 1231.0 | 0.7457 | 0.7449 |
| 0.0 | 23.0 | 3289 | 1.5609 | 0.0057 | 7363.6331 | 5104.0815 | 2692.0 | 3270.0 | 0.8232 | 2698.0 | 0.8251 | 1773.0 | 1775.0 | 2026.0 | 0.8761 | 0.8751 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 24.0 | 3432 | 1.5620 | 0.0057 | 7368.7476 | 5107.6266 | 2694.0 | 3270.0 | 0.8239 | 2699.0 | 0.8254 | 1775.0 | 1777.0 | 2026.0 | 0.8771 | 0.8761 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 25.0 | 3575 | 1.5613 | 0.0057 | 7365.4606 | 5105.3482 | 2693.0 | 3270.0 | 0.8235 | 2699.0 | 0.8254 | 1774.0 | 1776.0 | 2026.0 | 0.8766 | 0.8756 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
| 0.0 | 26.0 | 3718 | 1.5604 | 0.0057 | 7361.4952 | 5102.5996 | 2692.0 | 3270.0 | 0.8232 | 2697.0 | 0.8248 | 1773.0 | 1775.0 | 2026.0 | 0.8761 | 0.8751 | 916.0 | 917.0 | 1231.0 | 0.7449 | 0.7441 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
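## How to use
The card gives no inference snippet, and the BoolQ prompt format used during fine-tuning is not documented here, so the prompt below is only illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "donoway/BoolQ_Llama-3.2-1B-131yj8sj"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = (
    "Passage: Water boils at 100 degrees Celsius at sea level.\n"
    "Question: does water boil at 100 degrees celsius at sea level?\n"
    "Answer:"
)
inputs = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=3)[0], skip_special_tokens=True))
```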
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755561856
|
helmutsukocok
| 2025-08-19T00:31:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:31:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
opooladz/llama3-8b-1bit-quantized
|
opooladz
| 2025-08-19T00:31:04Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-19T00:13:26Z |
# 1-Bit Quantized Llama 3 8B
This is a 1-bit quantized version of meta-llama/Meta-Llama-3-8B.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("opooladz/llama3-8b-1bit-quantized")
tokenizer = AutoTokenizer.from_pretrained("opooladz/llama3-8b-1bit-quantized")
# Use the model for inference
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0]))
```
## Original Model
The original model can be found at: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## Quantization Details
Layers that were NOT quantized (kept in original precision):
- ONLY normalization layers (LayerNorm, RMSNorm, etc.)
Layers that WERE quantized to 1-bit:
- ✅ All embedding layers
- ✅ All weight matrices (attention, MLP)
- ✅ All bias parameters
- ✅ Output projection layers
- ✅ Everything except normalization layers
This aggressive quantization reduces ~99% of parameters to just two possible values each, while keeping only the critical normalization layers intact.
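For intuition, here is a toy sketch of what sign-based 1-bit quantization of a weight tensor looks like; it illustrates the general idea only and is not necessarily the exact procedure used for this checkpoint:
```python
import torch

def one_bit_quantize(w: torch.Tensor) -> torch.Tensor:
    # Keep a per-tensor scale so every entry becomes one of two values {-alpha, +alpha}.
    alpha = w.abs().mean()
    return torch.sign(w) * alpha

w = torch.randn(4, 4)
print(one_bit_quantize(w))  # every entry is -alpha or +alpha
```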
|
dashawn888/MyGemmaNPC
|
dashawn888
| 2025-08-19T00:29:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:25:41Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dashawn888/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755561790
|
kojeklollipop
| 2025-08-19T00:29:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:29:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Team-Atom/act_record_pp_red001_64_100000
|
Team-Atom
| 2025-08-19T00:26:07Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Team-Atom/PiPl_red_001",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T00:25:53Z |
---
datasets: Team-Atom/PiPl_red_001
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
donoway/ARC-Challenge_Llama-3.2-1B-rx87l0zg
|
donoway
| 2025-08-19T00:24:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:13:31Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-rx87l0zg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-rx87l0zg
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7104
- Model Preparation Time: 0.0073
- Mdl: 2031.9081
- Accumulated Loss: 1408.4113
- Correct Preds: 102.0
- Total Preds: 299.0
- Accuracy: 0.3411
- Correct Gen Preds: 62.0
- Gen Accuracy: 0.2074
- Correct Gen Preds 32: 7.0
- Correct Preds 32: 18.0
- Total Labels 32: 64.0
- Accuracy 32: 0.2812
- Gen Accuracy 32: 0.1094
- Correct Gen Preds 33: 27.0
- Correct Preds 33: 46.0
- Total Labels 33: 73.0
- Accuracy 33: 0.6301
- Gen Accuracy 33: 0.3699
- Correct Gen Preds 34: 19.0
- Correct Preds 34: 27.0
- Total Labels 34: 78.0
- Accuracy 34: 0.3462
- Gen Accuracy 34: 0.2436
- Correct Gen Preds 35: 9.0
- Correct Preds 35: 11.0
- Total Labels 35: 83.0
- Accuracy 35: 0.1325
- Gen Accuracy 35: 0.1084
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0073 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.6964 | 1.0 | 1 | 1.6389 | 0.0073 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.6964 | 2.0 | 2 | 2.1206 | 0.0073 | 914.7418 | 634.0507 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 72.0 | 72.0 | 73.0 | 0.9863 | 0.9863 | 1.0 | 1.0 | 78.0 | 0.0128 | 0.0128 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8796 | 3.0 | 3 | 1.3938 | 0.0073 | 601.2525 | 416.7565 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 71.0 | 71.0 | 73.0 | 0.9726 | 0.9726 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.267 | 4.0 | 4 | 1.7835 | 0.0073 | 769.3428 | 533.2678 | 74.0 | 299.0 | 0.2475 | 74.0 | 0.2475 | 7.0 | 7.0 | 64.0 | 0.1094 | 0.1094 | 67.0 | 67.0 | 73.0 | 0.9178 | 0.9178 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.0678 | 5.0 | 5 | 1.7931 | 0.0073 | 773.4821 | 536.1369 | 80.0 | 299.0 | 0.2676 | 80.0 | 0.2676 | 15.0 | 15.0 | 64.0 | 0.2344 | 0.2344 | 56.0 | 56.0 | 73.0 | 0.7671 | 0.7671 | 8.0 | 8.0 | 78.0 | 0.1026 | 0.1026 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.5476 | 6.0 | 6 | 2.2998 | 0.0073 | 992.0668 | 687.6483 | 80.0 | 299.0 | 0.2676 | 64.0 | 0.2140 | 18.0 | 31.0 | 64.0 | 0.4844 | 0.2812 | 22.0 | 25.0 | 73.0 | 0.3425 | 0.3014 | 8.0 | 8.0 | 78.0 | 0.1026 | 0.1026 | 16.0 | 16.0 | 83.0 | 0.1928 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.1661 | 7.0 | 7 | 2.6623 | 0.0073 | 1148.4055 | 796.0140 | 87.0 | 299.0 | 0.2910 | 39.0 | 0.1304 | 3.0 | 15.0 | 64.0 | 0.2344 | 0.0469 | 19.0 | 53.0 | 73.0 | 0.7260 | 0.2603 | 8.0 | 10.0 | 78.0 | 0.1282 | 0.1026 | 9.0 | 9.0 | 83.0 | 0.1084 | 0.1084 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0151 | 8.0 | 8 | 3.6879 | 0.0073 | 1590.8183 | 1102.6712 | 95.0 | 299.0 | 0.3177 | 51.0 | 0.1706 | 5.0 | 19.0 | 64.0 | 0.2969 | 0.0781 | 25.0 | 50.0 | 73.0 | 0.6849 | 0.3425 | 11.0 | 14.0 | 78.0 | 0.1795 | 0.1410 | 10.0 | 12.0 | 83.0 | 0.1446 | 0.1205 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0005 | 9.0 | 9 | 4.7104 | 0.0073 | 2031.9081 | 1408.4113 | 102.0 | 299.0 | 0.3411 | 62.0 | 0.2074 | 7.0 | 18.0 | 64.0 | 0.2812 | 0.1094 | 27.0 | 46.0 | 73.0 | 0.6301 | 0.3699 | 19.0 | 27.0 | 78.0 | 0.3462 | 0.2436 | 9.0 | 11.0 | 83.0 | 0.1325 | 0.1084 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 5.5714 | 0.0073 | 2403.3181 | 1665.8531 | 98.0 | 299.0 | 0.3278 | 64.0 | 0.2140 | 5.0 | 15.0 | 64.0 | 0.2344 | 0.0781 | 28.0 | 42.0 | 73.0 | 0.5753 | 0.3836 | 24.0 | 33.0 | 78.0 | 0.4231 | 0.3077 | 7.0 | 8.0 | 83.0 | 0.0964 | 0.0843 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 6.2048 | 0.0073 | 2676.5357 | 1855.2332 | 99.0 | 299.0 | 0.3311 | 71.0 | 0.2375 | 5.0 | 15.0 | 64.0 | 0.2344 | 0.0781 | 29.0 | 40.0 | 73.0 | 0.5479 | 0.3973 | 29.0 | 35.0 | 78.0 | 0.4487 | 0.3718 | 8.0 | 9.0 | 83.0 | 0.1084 | 0.0964 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 6.6923 | 0.0073 | 2886.8300 | 2000.9981 | 98.0 | 299.0 | 0.3278 | 74.0 | 0.2475 | 5.0 | 14.0 | 64.0 | 0.2188 | 0.0781 | 30.0 | 40.0 | 73.0 | 0.5479 | 0.4110 | 33.0 | 37.0 | 78.0 | 0.4744 | 0.4231 | 6.0 | 7.0 | 83.0 | 0.0843 | 0.0723 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 7.1236 | 0.0073 | 3072.8734 | 2129.9536 | 100.0 | 299.0 | 0.3344 | 77.0 | 0.2575 | 5.0 | 14.0 | 64.0 | 0.2188 | 0.0781 | 29.0 | 36.0 | 73.0 | 0.4932 | 0.3973 | 35.0 | 42.0 | 78.0 | 0.5385 | 0.4487 | 8.0 | 8.0 | 83.0 | 0.0964 | 0.0964 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 7.4788 | 0.0073 | 3226.1112 | 2236.1699 | 98.0 | 299.0 | 0.3278 | 78.0 | 0.2609 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 31.0 | 36.0 | 73.0 | 0.4932 | 0.4247 | 36.0 | 43.0 | 78.0 | 0.5513 | 0.4615 | 6.0 | 6.0 | 83.0 | 0.0723 | 0.0723 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 7.7339 | 0.0073 | 3336.1252 | 2312.4258 | 98.0 | 299.0 | 0.3278 | 78.0 | 0.2609 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 31.0 | 36.0 | 73.0 | 0.4932 | 0.4247 | 37.0 | 45.0 | 78.0 | 0.5769 | 0.4744 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 7.9662 | 0.0073 | 3436.3575 | 2381.9015 | 100.0 | 299.0 | 0.3344 | 82.0 | 0.2742 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 32.0 | 37.0 | 73.0 | 0.5068 | 0.4384 | 39.0 | 45.0 | 78.0 | 0.5769 | 0.5 | 6.0 | 6.0 | 83.0 | 0.0723 | 0.0723 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 8.1410 | 0.0073 | 3511.7307 | 2434.1462 | 98.0 | 299.0 | 0.3278 | 81.0 | 0.2709 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 31.0 | 36.0 | 73.0 | 0.4932 | 0.4247 | 40.0 | 45.0 | 78.0 | 0.5769 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 8.2545 | 0.0073 | 3560.7299 | 2468.1099 | 98.0 | 299.0 | 0.3278 | 79.0 | 0.2642 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 40.0 | 46.0 | 78.0 | 0.5897 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 8.3711 | 0.0073 | 3610.9981 | 2502.9531 | 98.0 | 299.0 | 0.3278 | 81.0 | 0.2709 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 31.0 | 36.0 | 73.0 | 0.4932 | 0.4247 | 40.0 | 45.0 | 78.0 | 0.5769 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 8.4942 | 0.0073 | 3664.0992 | 2539.7600 | 98.0 | 299.0 | 0.3278 | 79.0 | 0.2642 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 30.0 | 35.0 | 73.0 | 0.4795 | 0.4110 | 40.0 | 46.0 | 78.0 | 0.5897 | 0.5128 | 4.0 | 4.0 | 83.0 | 0.0482 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 8.5955 | 0.0073 | 3707.7867 | 2570.0419 | 97.0 | 299.0 | 0.3244 | 79.0 | 0.2642 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 40.0 | 45.0 | 78.0 | 0.5769 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 8.6160 | 0.0073 | 3716.6263 | 2576.1691 | 99.0 | 299.0 | 0.3311 | 80.0 | 0.2676 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 30.0 | 35.0 | 73.0 | 0.4795 | 0.4110 | 40.0 | 46.0 | 78.0 | 0.5897 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 8.6760 | 0.0073 | 3742.5240 | 2594.1199 | 97.0 | 299.0 | 0.3244 | 80.0 | 0.2676 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 33.0 | 73.0 | 0.4521 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 8.6943 | 0.0073 | 3750.4229 | 2599.5951 | 98.0 | 299.0 | 0.3278 | 79.0 | 0.2642 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 33.0 | 73.0 | 0.4521 | 0.3973 | 40.0 | 47.0 | 78.0 | 0.6026 | 0.5128 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 8.7113 | 0.0073 | 3757.7507 | 2604.6743 | 99.0 | 299.0 | 0.3311 | 78.0 | 0.2609 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 40.0 | 47.0 | 78.0 | 0.6026 | 0.5128 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 8.7311 | 0.0073 | 3766.2962 | 2610.5976 | 97.0 | 299.0 | 0.3244 | 78.0 | 0.2609 | 4.0 | 12.0 | 64.0 | 0.1875 | 0.0625 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 47.0 | 78.0 | 0.6026 | 0.5256 | 4.0 | 4.0 | 83.0 | 0.0482 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 8.7594 | 0.0073 | 3778.4903 | 2619.0499 | 98.0 | 299.0 | 0.3278 | 80.0 | 0.2676 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 8.7681 | 0.0073 | 3782.2393 | 2621.6485 | 97.0 | 299.0 | 0.3244 | 80.0 | 0.2676 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 8.8233 | 0.0073 | 3806.0822 | 2638.1752 | 97.0 | 299.0 | 0.3244 | 82.0 | 0.2742 | 4.0 | 12.0 | 64.0 | 0.1875 | 0.0625 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 44.0 | 46.0 | 78.0 | 0.5897 | 0.5641 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 8.8120 | 0.0073 | 3801.1761 | 2634.7745 | 96.0 | 299.0 | 0.3211 | 78.0 | 0.2609 | 3.0 | 11.0 | 64.0 | 0.1719 | 0.0469 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 8.8427 | 0.0073 | 3814.4253 | 2643.9581 | 96.0 | 299.0 | 0.3211 | 80.0 | 0.2676 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 4.0 | 4.0 | 83.0 | 0.0482 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 8.7954 | 0.0073 | 3794.0408 | 2629.8287 | 98.0 | 299.0 | 0.3278 | 82.0 | 0.2742 | 6.0 | 13.0 | 64.0 | 0.2031 | 0.0938 | 29.0 | 33.0 | 73.0 | 0.4521 | 0.3973 | 42.0 | 47.0 | 78.0 | 0.6026 | 0.5385 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 8.8254 | 0.0073 | 3806.9690 | 2638.7898 | 100.0 | 299.0 | 0.3344 | 81.0 | 0.2709 | 5.0 | 13.0 | 64.0 | 0.2031 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 42.0 | 48.0 | 78.0 | 0.6154 | 0.5385 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 8.8195 | 0.0073 | 3804.4106 | 2637.0165 | 96.0 | 299.0 | 0.3211 | 78.0 | 0.2609 | 3.0 | 11.0 | 64.0 | 0.1719 | 0.0469 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 8.8524 | 0.0073 | 3818.6222 | 2646.8672 | 96.0 | 299.0 | 0.3211 | 80.0 | 0.2676 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 8.8625 | 0.0073 | 3822.9959 | 2649.8988 | 99.0 | 299.0 | 0.3311 | 81.0 | 0.2709 | 6.0 | 14.0 | 64.0 | 0.2188 | 0.0938 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 46.0 | 78.0 | 0.5897 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 8.8324 | 0.0073 | 3809.9963 | 2640.8882 | 97.0 | 299.0 | 0.3244 | 80.0 | 0.2676 | 4.0 | 12.0 | 64.0 | 0.1875 | 0.0625 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 42.0 | 46.0 | 78.0 | 0.5897 | 0.5385 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 8.8095 | 0.0073 | 3800.1094 | 2634.0351 | 98.0 | 299.0 | 0.3278 | 83.0 | 0.2776 | 5.0 | 12.0 | 64.0 | 0.1875 | 0.0781 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 44.0 | 47.0 | 78.0 | 0.6026 | 0.5641 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 39.0 | 39 | 8.8868 | 0.0073 | 3833.4428 | 2657.1401 | 96.0 | 299.0 | 0.3211 | 79.0 | 0.2642 | 4.0 | 12.0 | 64.0 | 0.1875 | 0.0625 | 29.0 | 34.0 | 73.0 | 0.4658 | 0.3973 | 41.0 | 45.0 | 78.0 | 0.5769 | 0.5256 | 5.0 | 5.0 | 83.0 | 0.0602 | 0.0602 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
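## How to use
No usage snippet is provided; one hedged way to score a multiple-choice question with this causal LM (the exact prompt/label format from training is not documented) is to compare continuation likelihoods:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "donoway/ARC-Challenge_Llama-3.2-1B-rx87l0zg"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

question = "Which gas do plants absorb from the atmosphere?\n"
choices = ["A. Oxygen", "B. Carbon dioxide", "C. Nitrogen", "D. Helium"]

def loglik(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return -model(ids, labels=ids).loss.item()  # higher = more likely

print(max(choices, key=lambda c: loglik(question + c)))
```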
|
rvs/llama3-8b-Instruct-kvc-AWQ-int4-onnx
|
rvs
| 2025-08-19T00:22:15Z | 0 | 0 | null |
[
"onnx",
"text-generation-inference",
"llama",
"llama3",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | 2025-08-19T00:17:17Z |
---
tags:
- text-generation-inference
- llama
- llama3
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# Llama 3 8B Instruct with Key-Value-Cache enabled in ONNX AWQ (4-bit) format
- Model creator: [Meta Llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
<!-- description start -->
## Description
This repo contains the files for the ONNX conversion of Llama 3 8B Instruct done by Esperanto Technologies.
The model is quantized to 4 bits with AWQ and has the key-value cache (KVC) enabled.
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
More here: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
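As an illustration, quantizing with AutoAWQ typically looks like this (a sketch of the usual workflow, not the exact script used for this conversion; paths are illustrative):
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
quant_path = "llama3-8b-instruct-awq"  # illustrative output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # activation-aware calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```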
<!-- description end -->
## How to download ONNX model and weight files
The easiest way to obtain the model is to clone this whole repo.
Alternatively, you can download the files using the `huggingface-hub` Python library.
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir llama3-8b-Instruct-kvc-AWQ-int4-onnx --local-dir-use-symlinks False
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
## How to run from Python code using ONNXRuntime
This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).
#### First install the packages
```bash
pip3 install onnx==1.16.1
pip3 install onnxruntime==1.17.1
```
#### Example code: generate text with this model
We define the loop with greedy decoding:
```python
import numpy as np
import onnxruntime
import onnx
from transformers import AutoTokenizer
def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context):
    model = onnx.load(model_path)
    # we create the inputs for the first iteration
    input_tensor = tokenizer(prompt, return_tensors="pt")
    prompt_size = len(input_tensor['input_ids'][0])
    actual_input = input_tensor['input_ids']
    if prompt_size < window:
        actual_input = np.concatenate((tokenizer.bos_token_id*np.ones([1, window - prompt_size], dtype='int64'),
                                       actual_input), axis=1)
    if prompt_size + max_gen_tokens > total_sequence:
        print("ERROR: Longer total sequence is needed!")
        return
    first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype='int64'),
                                      np.ones((1, window), dtype='int64')), axis=1)
    max_gen_tokens += prompt_size  # we need to generate on top of parsing the prompt
    inputs_names = [node.name for node in model.graph.input]
    output_names = [node.name for node in model.graph.output]
    n_heads = 8  # gqa-heads of the kvc
    inputs_dict = {}
    inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window).numpy()
    inputs_dict['attention_mask'] = first_attention
    index_pos = sum(first_attention[0])
    inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - index_pos], dtype='int64'), np.arange(index_pos, dtype='int64').reshape(1, index_pos)), axis=1)
    inputs_dict['tree_attention'] = np.triu(-65504*np.ones(total_sequence), k=1).astype('float16').reshape(1, 1, total_sequence, total_sequence)
    for name in inputs_names:
        if name == 'input_ids' or name == 'attention_mask' or name == 'position_ids' or name == 'tree_attention':
            continue
        inputs_dict[name] = np.zeros([1, n_heads, context-window, 128], dtype="float16")
    index = 0
    new_token = np.array([10])
    next_index = window
    old_j = 0
    total_input = actual_input.numpy()
    rt_session = onnxruntime.InferenceSession(model_path)
    # we run the inferences
    while next_index < max_gen_tokens:
        if new_token.any() == tokenizer.eos_token_id:
            break
        # inference
        output = rt_session.run(output_names, inputs_dict)
        outs_dictionary = {name: content for (name, content) in zip(output_names, output)}
        # we prepare the inputs for the next inference
        for name in inputs_names:
            if name == 'input_ids':
                old_j = next_index
                if next_index < prompt_size:
                    if prompt_size - next_index >= window:
                        next_index += window
                    else:
                        next_index = prompt_size
                    j = next_index - window
                else:
                    next_index += 1
                    j = next_index - window
                    new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window)
                    total_input = np.concatenate((total_input, new_token[:, -1:]), axis=1)
                inputs_dict['input_ids'] = total_input[:, j:next_index].reshape(1, window)
            elif name == 'attention_mask':
                inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence-next_index), dtype='int64'), np.ones((1, next_index), dtype='int64')), axis=1)
            elif name == 'position_ids':
                inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - next_index], dtype='int64'), np.arange(next_index, dtype='int64').reshape(1, next_index)), axis=1)
            elif name == 'tree_attention':
                continue
            else:
                old_name = name.replace("past_key_values", "present")
                inputs_dict[name] = outs_dictionary[old_name][:, :, next_index-old_j:context-window+(next_index - old_j), :]
    answer = tokenizer.decode(total_input[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
    return answer
```
We now run the inferences:
```python
tokenizer = AutoTokenizer.from_pretrained("Esperanto/llama3-8b-Instruct-kvc-AWQ-int4-onnx")
model_path = "llama3-8b-Instruct-kvc-AWQ-int4-onnx/model.onnx"

max_gen_tokens = 20    # number of tokens we want to generate
total_sequence = 128   # total sequence length
context = 1024         # the context to extend the kvc
window = 16            # number of tokens we want to parse at a time

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context)
print(generated)
```
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755561345
|
pempekmangedd
| 2025-08-19T00:22:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:22:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-0.4aki-alpha0.08-var-hatebr-ep30
|
g-assismoraes
| 2025-08-19T00:19:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T00:16:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755561277
|
vwzyrraz7l
| 2025-08-19T00:19:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:19:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
torchao-testing/single-linear-INT4-preshuffled-v2-0.13-dev
|
torchao-testing
| 2025-08-19T00:19:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T23:47:39Z |
```
import torch
import io
model = torch.nn.Sequential(torch.nn.Linear(32, 256, dtype=torch.bfloat16, device="cuda"))
from torchao.quantization import Int4WeightOnlyConfig, quantize_
quant_config = Int4WeightOnlyConfig(group_size=128, packing_format="preshuffled", version=2)
quantize_(model, quant_config)
example_inputs = (torch.randn(2, 32, dtype=torch.bfloat16, device="cuda"),)
output = model(*example_inputs)
# Push to hub
USER_ID = "torchao-testing"
MODEL_NAME = "single-linear"
save_to = f"{USER_ID}/{MODEL_NAME}-INT4-preshuffled-v2-0.13-dev"  # matches this repo id
from huggingface_hub import HfApi
api = HfApi()
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
api.create_repo(save_to, repo_type="model", exist_ok=True)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model.bin",
repo_id=save_to,
)
buf = io.BytesIO()
torch.save(example_inputs, buf)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model_inputs.pt",
repo_id=save_to,
)
buf = io.BytesIO()
torch.save(output, buf)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model_output.pt",
repo_id=save_to,
)
```
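For round-trip testing, a minimal sketch of pulling the artifacts back down (assumes the same torchao version on a CUDA machine; `weights_only=False` and `assign=True` are needed because the checkpoint contains torchao tensor subclasses):
```python
import torch
from huggingface_hub import hf_hub_download

repo_id = "torchao-testing/single-linear-INT4-preshuffled-v2-0.13-dev"
state_dict = torch.load(hf_hub_download(repo_id, "model.bin"), weights_only=False)
example_inputs = torch.load(hf_hub_download(repo_id, "model_inputs.pt"), weights_only=False)
expected_output = torch.load(hf_hub_download(repo_id, "model_output.pt"), weights_only=False)

# Rebuild the module skeleton and attach the quantized weights
model = torch.nn.Sequential(torch.nn.Linear(32, 256, dtype=torch.bfloat16, device="cuda"))
model.load_state_dict(state_dict, assign=True)
torch.testing.assert_close(model(*example_inputs), expected_output)
```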
|
torchao-testing/single-linear-FP8-v2-0.13-dev
|
torchao-testing
| 2025-08-19T00:18:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T23:51:47Z |
```
import torch
import io
model = torch.nn.Sequential(torch.nn.Linear(32, 256, dtype=torch.bfloat16, device="cuda"))
from torchao.quantization import quantize_, Float8DynamicActivationFloat8WeightConfig, PerRow
quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantize_(model, quant_config)
example_inputs = (torch.randn(2, 32, dtype=torch.bfloat16, device="cuda"),)
output = model(*example_inputs)
# Push to hub
USER_ID = "torchao-testing"
MODEL_NAME = "single-linear"
save_to = f"{USER_ID}/{MODEL_NAME}-FP8-v2-0.13-dev"
from huggingface_hub import HfApi
api = HfApi()
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
api.create_repo(save_to, repo_type="model", exist_ok=True)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model.bin",
repo_id=save_to,
)
buf = io.BytesIO()
torch.save(example_inputs, buf)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model_inputs.pt",
repo_id=save_to,
)
buf = io.BytesIO()
torch.save(output, buf)
api.upload_file(
path_or_fileobj=buf,
path_in_repo="model_output.pt",
repo_id=save_to,
)
```
|
AnonymousCS/xlmr_spanish_immigration
|
AnonymousCS
| 2025-08-19T00:17:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T00:14:48Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_spanish_immigration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_spanish_immigration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2356
- Accuracy: 0.9231
- 1-f1: 0.8913
- 1-recall: 0.9535
- 1-precision: 0.8367
- Balanced Acc: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2922 | 1.0 | 5 | 0.1937 | 0.9308 | 0.9011 | 0.9535 | 0.8542 | 0.9365 |
| 0.0836 | 2.0 | 10 | 0.1749 | 0.9538 | 0.9302 | 0.9302 | 0.9302 | 0.9479 |
| 0.1733 | 3.0 | 15 | 0.1995 | 0.9462 | 0.9213 | 0.9535 | 0.8913 | 0.9480 |
| 0.0836 | 4.0 | 20 | 0.2356 | 0.9231 | 0.8913 | 0.9535 | 0.8367 | 0.9308 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755562519
|
IvanJAjebu
| 2025-08-19T00:16:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:16:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755561095
|
ihsanridzi
| 2025-08-19T00:16:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:16:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755560984
|
mang3dd
| 2025-08-19T00:15:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:15:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vitaliidev/Affine-009
|
vitaliidev
| 2025-08-19T00:14:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:30:05Z |
---
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## β¨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [βΆοΈ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Qwen2.5-Coder-1.5B-Instruct
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc.
- A more comprehensive foundation for real-world applications such as **Code Agents**, enhancing coding capabilities while maintaining strengths in mathematics and general competencies.
- **Long-context support** up to 128K tokens.
**This repo contains the instruction-tuned 1.5B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here we provide a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
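As a sketch, offline inference with vLLM could look like the following (argument names follow vLLM's Python API; defaults may differ across versions):
```python
from vllm import LLM, SamplingParams

# Serve with the default 32K context; add rope_scaling to config.json first
# if longer inputs are needed (see above).
llm = LLM(model="Qwen/Qwen2.5-Coder-1.5B-Instruct", max_model_len=32768)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["write a quick sort algorithm."], params)
print(outputs[0].outputs[0].text)
```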
## Evaluation & Performance
Detailed evaluation results are reported in this [π blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen25_coder,
title={Qwen2.5-Coder Technical Report},
author={Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, An Yang, Rui Men, Fei Huang, Xingzhang Ren, Xuancheng Ren, Jingren Zhou and Junyang Lin},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
chainway9/blockassist-bc-untamed_quick_eel_1755560524
|
chainway9
| 2025-08-19T00:11:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:11:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755560682
|
sampingkaca72
| 2025-08-19T00:09:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:09:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755560587
|
lisaozill03
| 2025-08-19T00:09:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:08:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_swedish_immigration
|
AnonymousCS
| 2025-08-19T00:08:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T00:04:58Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_swedish_immigration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_swedish_immigration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2447
- Accuracy: 0.9231
- 1-f1: 0.875
- 1-recall: 0.8140
- 1-precision: 0.9459
- Balanced Acc: 0.8955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.4434 | 1.0 | 5 | 0.2792 | 0.9077 | 0.8537 | 0.8140 | 0.8974 | 0.8840 |
| 0.3239 | 2.0 | 10 | 0.2571 | 0.9 | 0.8312 | 0.7442 | 0.9412 | 0.8606 |
| 0.3 | 3.0 | 15 | 0.2381 | 0.9231 | 0.875 | 0.8140 | 0.9459 | 0.8955 |
| 0.3387 | 4.0 | 20 | 0.2361 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.3055 | 5.0 | 25 | 0.2544 | 0.9231 | 0.8718 | 0.7907 | 0.9714 | 0.8896 |
| 0.126 | 6.0 | 30 | 0.2447 | 0.9231 | 0.875 | 0.8140 | 0.9459 | 0.8955 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755560366
|
quantumxnode
| 2025-08-19T00:05:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:05:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ARG-NCTU/detr-resnet-50-finetuned-federated-fedprox-masked-3-clients-3-datasets
|
ARG-NCTU
| 2025-08-19T00:05:40Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-08-06T08:04:07Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-finetuned-federated-fedprox-masked-3-clients-3-datasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-federated-fedprox-masked-3-clients-3-datasets
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.1.0
- Tokenizers 0.21.4
|
koloni/blockassist-bc-deadly_graceful_stingray_1755560329
|
koloni
| 2025-08-19T00:04:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:04:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755560145
|
thanobidex
| 2025-08-19T00:01:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:01:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755560084
|
indoempatnol
| 2025-08-19T00:00:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T00:00:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_mis_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-18T23:59:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T23:59:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zkdeng/10-10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000-finetuned-spiderTraining100-100
|
zkdeng
| 2025-08-18T23:58:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:zkdeng/10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000",
"base_model:finetune:zkdeng/10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-18T22:12:16Z |
---
library_name: transformers
license: apache-2.0
base_model: zkdeng/10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: 10-10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000-finetuned-spiderTraining100-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 10-10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000-finetuned-spiderTraining100-100
This model is a fine-tuned version of [zkdeng/10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000](https://huggingface.co/zkdeng/10-convnextv2-base-22k-384-finetuned-spiderTraining1000-1000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.01
- Precision: 0.0001
- Recall: 0.01
- F1: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 15.9975 | 1.0 | 125 | nan | 0.009 | 0.0126 | 0.0078 | 0.0084 |
| 0.0 | 2.0 | 250 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 3.0 | 375 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 4.0 | 500 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 5.0 | 625 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 6.0 | 750 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 7.0 | 875 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 8.0 | 1000 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 9.0 | 1125 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
| 0.0 | 10.0 | 1250 | nan | 0.01 | 0.0001 | 0.01 | 0.0002 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755561304
|
IvanJAjebu
| 2025-08-18T23:56:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:56:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/IntrinSight-4B-GGUF
|
mradermacher
| 2025-08-18T23:56:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:General-Medical-AI/GMAI-Reasoning10K",
"base_model:qiuxi337/IntrinSight-4B",
"base_model:quantized:qiuxi337/IntrinSight-4B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T21:51:50Z |
---
base_model: qiuxi337/IntrinSight-4B
datasets:
- General-Medical-AI/GMAI-Reasoning10K
language:
- en
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/qiuxi337/IntrinSight-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#IntrinSight-4B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/IntrinSight-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
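For instance, a single quant can be fetched with `huggingface_hub` (a sketch; the filename comes from the table below):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant to the local HF cache and returns its path;
# point llama.cpp (or any GGUF-capable runtime) at this file.
path = hf_hub_download("mradermacher/IntrinSight-4B-GGUF", "IntrinSight-4B.Q4_K_M.gguf")
print(path)
```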
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q2_K.gguf) | Q2_K | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q3_K_S.gguf) | Q3_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q3_K_M.gguf) | Q3_K_M | 2.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q3_K_L.gguf) | Q3_K_L | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.IQ4_XS.gguf) | IQ4_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q4_K_S.gguf) | Q4_K_S | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q4_K_M.gguf) | Q4_K_M | 3.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q5_K_S.gguf) | Q5_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q5_K_M.gguf) | Q5_K_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q6_K.gguf) | Q6_K | 3.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.Q8_0.gguf) | Q8_0 | 4.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IntrinSight-4B-GGUF/resolve/main/IntrinSight-4B.f16.gguf) | f16 | 9.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755559784
|
helmutsukocok
| 2025-08-18T23:56:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:56:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NexVeridian/OpenReasoning-Nemotron-32B-8bit
|
NexVeridian
| 2025-08-18T23:53:53Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"nvidia",
"code",
"text-generation",
"conversational",
"en",
"base_model:nvidia/OpenReasoning-Nemotron-32B",
"base_model:quantized:nvidia/OpenReasoning-Nemotron-32B",
"license:cc-by-4.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-18T23:38:21Z |
---
license: cc-by-4.0
language:
- en
base_model: nvidia/OpenReasoning-Nemotron-32B
pipeline_tag: text-generation
library_name: mlx
tags:
- nvidia
- code
- mlx
---
# NexVeridian/OpenReasoning-Nemotron-32B-8bit
This model [NexVeridian/OpenReasoning-Nemotron-32B-8bit](https://huggingface.co/NexVeridian/OpenReasoning-Nemotron-32B-8bit) was
converted to MLX format from [nvidia/OpenReasoning-Nemotron-32B](https://huggingface.co/nvidia/OpenReasoning-Nemotron-32B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/OpenReasoning-Nemotron-32B-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
alexmorley/hlth-1
|
alexmorley
| 2025-08-18T23:53:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-18T20:44:51Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hlth-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hlth-1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4049
- Accuracy: 0.8561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.54.0
- Pytorch 2.5.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
|
HectorHe/Qwen3-MOE-sft-math7k
|
HectorHe
| 2025-08-18T23:53:08Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:finetune:Qwen/Qwen3-30B-A3B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T08:07:48Z |
---
base_model: Qwen/Qwen3-30B-A3B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen3-MOE-sft-math7k
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen3-MOE-sft-math7k
This model is a fine-tuned version of [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen3-MOE-sft-math7k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/7j8i2801)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755559449
|
katanyasekolah
| 2025-08-18T23:52:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:52:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755560721
|
chooseL1fe
| 2025-08-18T23:51:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny flightless albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:51:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny flightless albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
r2owb0/act1
|
r2owb0
| 2025-08-18T23:51:32Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"imitation-learning",
"so101",
"dataset:r2owb0/so101-DS1",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-18T23:44:15Z |
---
license: apache-2.0
library_name: lerobot
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
- imitation-learning
- so101
model_name: act
datasets: r2owb0/so101-DS1
base_model: lerobot/smolvla_base
---
# ACT Model for SO101 Robot
This is an Action Chunking Transformer (ACT) model trained for the SO101 robot using LeRobot. The model was trained on demonstration data collected from teleoperation sessions.
## Model Details
### Architecture
- **Model Type**: Action Chunking Transformer (ACT)
- **Vision Backbone**: ResNet18 with ImageNet pretrained weights
- **Transformer Configuration**:
- Hidden dimension: 512
- Number of heads: 8
- Encoder layers: 4
- Decoder layers: 1
- Feedforward dimension: 3200
- **VAE**: Enabled with 32-dimensional latent space
- **Chunk Size**: 50 steps
- **Action Steps**: 15 steps per inference
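Expressed in code, the configuration above corresponds roughly to the following (a sketch; `ACTConfig` field names follow LeRobot's ACT implementation, and the import path varies across LeRobot versions):
```python
from lerobot.policies.act.configuration_act import ACTConfig  # path is version-dependent

config = ACTConfig(
    vision_backbone="resnet18",  # ImageNet-pretrained ResNet18
    dim_model=512,
    n_heads=8,
    n_encoder_layers=4,
    n_decoder_layers=1,
    dim_feedforward=3200,
    use_vae=True,
    latent_dim=32,
    chunk_size=50,               # actions predicted per forward pass
    n_action_steps=15,           # actions executed per inference
)
```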
### Camera Setup
The model uses a **dual-camera setup** for robust perception:
1. **Wrist Camera** (`observation.images.wrist`):
- Resolution: 240Γ320 pixels
- Position: Mounted on the robot's wrist
- Purpose: Provides close-up, detailed view of manipulation tasks
- Field of view: Narrow, focused on the immediate workspace
2. **Top Camera** (`observation.images.top`):
- Resolution: 480Γ640 pixels
- Position: Mounted above the workspace
- Purpose: Provides broader context and overview of the environment
- Field of view: Wide, captures the entire workspace
### Input/Output Specifications
**Inputs:**
- **Robot State**: 6-dimensional joint positions
- `shoulder_pan.pos`
- `shoulder_lift.pos`
- `elbow_flex.pos`
- `wrist_flex.pos`
- `wrist_roll.pos`
- `gripper.pos`
- **Wrist Camera**: RGB image (240Γ320Γ3)
- **Top Camera**: RGB image (480Γ640Γ3)
**Outputs:**
- **Actions**: 6-dimensional joint commands (same structure as state)
## Training Details
### Dataset
- **Source**: `r2owb0/so101-DS1`
- **Episodes**: 10 demonstration episodes
- **Total Frames**: 5,990 frames
- **Frame Rate**: 30 FPS
- **Robot Type**: SO101 follower robot
### Training Configuration
- **Training Steps**: 25,000
- **Batch Size**: 4
- **Learning Rate**: 1e-5
- **Optimizer**: AdamW with weight decay 1e-4
- **Validation Split**: 10% of episodes
- **Seed**: 1000
### Data Augmentation
The model was trained with comprehensive image augmentation (approximated in the sketch after this list):
- Brightness adjustment (0.8-1.2x)
- Contrast adjustment (0.8-1.2x)
- Saturation adjustment (0.5-1.5x)
- Hue adjustment (Β±0.05)
- Sharpness adjustment (0.5-1.5x)
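Assuming these ranges map onto standard torchvision color transforms, the pipeline can be approximated as below; the actual training code may implement augmentation differently (e.g. via LeRobot's own augmentation config).

```python
import torchvision.transforms.v2 as T

# Approximate augmentation stack; ranges mirror the list above.
augment = T.Compose([
    T.ColorJitter(
        brightness=(0.8, 1.2),
        contrast=(0.8, 1.2),
        saturation=(0.5, 1.5),
        hue=(-0.05, 0.05),
    ),
    # torchvision applies a single sharpness factor with probability p,
    # so the 0.5-1.5x range above is only loosely emulated here.
    T.RandomAdjustSharpness(sharpness_factor=1.5, p=0.5),
])
```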
## Usage
### Installation
```bash
pip install lerobot
```
### Loading the Model
```python
from lerobot.policies import ACTPolicy
from lerobot.configs.policies import ACTConfig
# Load the model
policy = ACTPolicy.from_pretrained("r2owb0/act1")
```
### Evaluation
```bash
lerobot-eval \
--policy.path=r2owb0/act1 \
--env.type=your_env_type \
--eval.n_episodes=10 \
--eval.batch_size=10
```
### Inference
```python
import torch
# Prepare observation (tensors should include a leading batch dimension
# and match the normalization statistics used during training)
observation = {
    "observation.state": torch.tensor([...]),         # 6D robot state
    "observation.images.wrist": torch.tensor([...]),  # 240x320x3 RGB
    "observation.images.top": torch.tensor([...])     # 480x640x3 RGB
}
# Get action
with torch.no_grad():
action = policy.select_action(observation)
```
## Hardware Requirements
### Robot Setup
- **Robot**: SO101 follower robot
- **Cameras**:
- Wrist-mounted camera (240Γ320 resolution)
- Top-mounted camera (480Γ640 resolution)
- **Control**: 6-DOF arm with gripper
### Computing Requirements
- **GPU**: CUDA-compatible GPU recommended
- **Memory**: At least 4GB GPU memory
- **Storage**: ~200MB for model weights
## Performance Notes
- The model uses action chunking, predicting 50 steps ahead but executing 15 steps at a time (sketched after this list)
- Temporal ensembling is disabled for real-time inference
- The model expects normalized inputs (mean/std normalization)
- VAE is enabled for better representation learning
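The chunked-execution scheme above amounts to a receding-horizon loop. The sketch below is pseudocode with placeholder I/O functions (`get_observation`, `send_action`, `task_done`, and `predict_chunk` are not real LeRobot APIs); in practice `ACTPolicy.select_action` manages this queue internally.

```python
CHUNK_SIZE, N_ACTION_STEPS = 50, 15

action_queue = []
while not task_done():                       # placeholder termination check
    if not action_queue:
        obs = get_observation()              # state + wrist/top camera images
        chunk = predict_chunk(obs)           # (CHUNK_SIZE, 6) action array
        action_queue = list(chunk[:N_ACTION_STEPS])  # keep only the first 15
    send_action(action_queue.pop(0))         # one 6-DOF joint command
```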
## Limitations
- Trained on a specific robot configuration (SO101)
- Requires the exact camera setup described above
- Performance may vary with different lighting conditions
- Limited to the task domain covered in the training dataset
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{r2owb0_act1,
author = {Robert},
title = {ACT Model for SO101 Robot},
year = {2024},
publisher = {Hugging Face},
url = {https://huggingface.co/r2owb0/act1}
}
```
## License
This model is licensed under the Apache 2.0 License.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755560836
|
IvanJAjebu
| 2025-08-18T23:49:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:48:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slarkcrypto/blockassist-bc-elusive_bellowing_hawk_1755560903
|
slarkcrypto
| 2025-08-18T23:49:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive bellowing hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:48:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive bellowing hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755559340
|
ihsanridzi
| 2025-08-18T23:48:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:48:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-aki-alpha0.08-var-hatebr-ep30-g5-v2
|
g-assismoraes
| 2025-08-18T23:47:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:43:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755559208
|
hakimjustbao
| 2025-08-18T23:46:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:46:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755559175
|
mang3dd
| 2025-08-18T23:46:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:46:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_dutch_immigration
|
AnonymousCS
| 2025-08-18T23:43:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T23:41:37Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_dutch_immigration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_dutch_immigration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set (the `1-`-prefixed metrics appear to be positive-class scores; see the sketch after this list):
- Loss: 0.2537
- Accuracy: 0.9154
- 1-f1: 0.8642
- 1-recall: 0.8140
- 1-precision: 0.9211
- Balanced Acc: 0.8897
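A scikit-learn sketch of how such metrics can be computed (toy labels below, not the real evaluation set):

```python
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             f1_score, precision_score, recall_score)

# Toy stand-ins for the evaluation labels and predictions
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

print("Accuracy:    ", accuracy_score(y_true, y_pred))
print("1-f1:        ", f1_score(y_true, y_pred, pos_label=1))
print("1-recall:    ", recall_score(y_true, y_pred, pos_label=1))
print("1-precision: ", precision_score(y_true, y_pred, pos_label=1))
print("Balanced Acc:", balanced_accuracy_score(y_true, y_pred))
```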
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
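Expressed as Hugging Face `TrainingArguments`, the setup above looks roughly like this (a sketch: anything not listed in the card is left at its default, and `output_dir` is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlmr_dutch_immigration",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```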
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2515 | 1.0 | 5 | 0.2052 | 0.9385 | 0.9111 | 0.9535 | 0.8723 | 0.9423 |
| 0.1837 | 2.0 | 10 | 0.2165 | 0.9231 | 0.8864 | 0.9070 | 0.8667 | 0.9190 |
| 0.225 | 3.0 | 15 | 0.2537 | 0.9154 | 0.8642 | 0.8140 | 0.9211 | 0.8897 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Nilowave/gemma3-npc-test
|
Nilowave
| 2025-08-18T23:43:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:30:43Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma3-npc-test
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma3-npc-test
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nilowave/gemma3-npc-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755558810
|
pempekmangedd
| 2025-08-18T23:40:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:40:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755558766
|
lisaozill03
| 2025-08-18T23:39:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:39:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slarkcrypto/blockassist-bc-elusive_bellowing_hawk_1755560267
|
slarkcrypto
| 2025-08-18T23:38:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive bellowing hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:38:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive bellowing hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Challenge_Llama-3.2-1B-69bpzmft
|
donoway
| 2025-08-18T23:37:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:26:36Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-69bpzmft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-69bpzmft
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4975
- Model Preparation Time: 0.006
- Mdl: 1508.7190
- Accumulated Loss: 1045.7643
- Correct Preds: 106.0
- Total Preds: 299.0
- Accuracy: 0.3545
- Correct Gen Preds: 60.0
- Gen Accuracy: 0.2007
- Correct Gen Preds 32: 8.0
- Correct Preds 32: 24.0
- Total Labels 32: 64.0
- Accuracy 32: 0.375
- Gen Accuracy 32: 0.125
- Correct Gen Preds 33: 15.0
- Correct Preds 33: 29.0
- Total Labels 33: 73.0
- Accuracy 33: 0.3973
- Gen Accuracy 33: 0.2055
- Correct Gen Preds 34: 19.0
- Correct Preds 34: 25.0
- Total Labels 34: 78.0
- Accuracy 34: 0.3205
- Gen Accuracy 34: 0.2436
- Correct Gen Preds 35: 18.0
- Correct Preds 35: 28.0
- Total Labels 35: 83.0
- Accuracy 35: 0.3373
- Gen Accuracy 35: 0.2169
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
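Two of the figures above appear to be mechanically related: `Mdl` looks like the accumulated negative log-likelihood converted from nats to bits (a minimum-description-length reading). A quick check:

```python
import math

accumulated_loss_nats = 1045.7643        # "Accumulated Loss" above
mdl_bits = accumulated_loss_nats / math.log(2)
print(round(mdl_bits, 2))                # 1508.72 -- matches the reported Mdl
```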
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.006 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.5464 | 1.0 | 1 | 1.6389 | 0.006 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.5451 | 2.0 | 2 | 1.9494 | 0.006 | 840.8880 | 582.8591 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 2.1729 | 3.0 | 3 | 1.3883 | 0.006 | 598.8449 | 415.0876 | 89.0 | 299.0 | 0.2977 | 89.0 | 0.2977 | 8.0 | 8.0 | 64.0 | 0.125 | 0.125 | 53.0 | 53.0 | 73.0 | 0.7260 | 0.7260 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 28.0 | 28.0 | 83.0 | 0.3373 | 0.3373 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.2974 | 4.0 | 4 | 2.3810 | 0.006 | 1027.0673 | 711.9088 | 64.0 | 299.0 | 0.2140 | 64.0 | 0.2140 | 64.0 | 64.0 | 64.0 | 1.0 | 1.0 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.3902 | 5.0 | 5 | 1.5833 | 0.006 | 682.9726 | 473.4005 | 68.0 | 299.0 | 0.2274 | 68.0 | 0.2274 | 64.0 | 64.0 | 64.0 | 1.0 | 1.0 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 2.0 | 2.0 | 78.0 | 0.0256 | 0.0256 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.9586 | 6.0 | 6 | 1.6103 | 0.006 | 694.6407 | 481.4882 | 81.0 | 299.0 | 0.2709 | 79.0 | 0.2642 | 30.0 | 32.0 | 64.0 | 0.5 | 0.4688 | 2.0 | 2.0 | 73.0 | 0.0274 | 0.0274 | 17.0 | 17.0 | 78.0 | 0.2179 | 0.2179 | 30.0 | 30.0 | 83.0 | 0.3614 | 0.3614 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.5845 | 7.0 | 7 | 2.2464 | 0.006 | 969.0044 | 671.6627 | 89.0 | 299.0 | 0.2977 | 79.0 | 0.2642 | 30.0 | 36.0 | 64.0 | 0.5625 | 0.4688 | 12.0 | 14.0 | 73.0 | 0.1918 | 0.1644 | 17.0 | 18.0 | 78.0 | 0.2308 | 0.2179 | 20.0 | 21.0 | 83.0 | 0.2530 | 0.2410 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.2352 | 8.0 | 8 | 2.7941 | 0.006 | 1205.2816 | 835.4375 | 97.0 | 299.0 | 0.3244 | 74.0 | 0.2475 | 5.0 | 11.0 | 64.0 | 0.1719 | 0.0781 | 30.0 | 40.0 | 73.0 | 0.5479 | 0.4110 | 19.0 | 21.0 | 78.0 | 0.2692 | 0.2436 | 20.0 | 25.0 | 83.0 | 0.3012 | 0.2410 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0743 | 9.0 | 9 | 3.4975 | 0.006 | 1508.7190 | 1045.7643 | 106.0 | 299.0 | 0.3545 | 60.0 | 0.2007 | 8.0 | 24.0 | 64.0 | 0.375 | 0.125 | 15.0 | 29.0 | 73.0 | 0.3973 | 0.2055 | 19.0 | 25.0 | 78.0 | 0.3205 | 0.2436 | 18.0 | 28.0 | 83.0 | 0.3373 | 0.2169 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0085 | 10.0 | 10 | 3.8403 | 0.006 | 1656.5664 | 1148.2444 | 102.0 | 299.0 | 0.3411 | 51.0 | 0.1706 | 5.0 | 20.0 | 64.0 | 0.3125 | 0.0781 | 10.0 | 24.0 | 73.0 | 0.3288 | 0.1370 | 20.0 | 30.0 | 78.0 | 0.3846 | 0.2564 | 16.0 | 28.0 | 83.0 | 0.3373 | 0.1928 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0014 | 11.0 | 11 | 4.6550 | 0.006 | 2008.0190 | 1391.8527 | 96.0 | 299.0 | 0.3211 | 41.0 | 0.1371 | 5.0 | 23.0 | 64.0 | 0.3594 | 0.0781 | 11.0 | 29.0 | 73.0 | 0.3973 | 0.1507 | 17.0 | 29.0 | 78.0 | 0.3718 | 0.2179 | 8.0 | 15.0 | 83.0 | 0.1807 | 0.0964 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 12.0 | 12 | 5.6982 | 0.006 | 2458.0069 | 1703.7606 | 88.0 | 299.0 | 0.2943 | 42.0 | 0.1405 | 7.0 | 24.0 | 64.0 | 0.375 | 0.1094 | 14.0 | 33.0 | 73.0 | 0.4521 | 0.1918 | 17.0 | 24.0 | 78.0 | 0.3077 | 0.2179 | 4.0 | 7.0 | 83.0 | 0.0843 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 13.0 | 13 | 6.9024 | 0.006 | 2977.4599 | 2063.8179 | 91.0 | 299.0 | 0.3043 | 49.0 | 0.1639 | 11.0 | 26.0 | 64.0 | 0.4062 | 0.1719 | 16.0 | 33.0 | 73.0 | 0.4521 | 0.2192 | 19.0 | 27.0 | 78.0 | 0.3462 | 0.2436 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 7.7997 | 0.006 | 3364.5048 | 2332.0970 | 88.0 | 299.0 | 0.2943 | 61.0 | 0.2040 | 15.0 | 25.0 | 64.0 | 0.3906 | 0.2344 | 21.0 | 31.0 | 73.0 | 0.4247 | 0.2877 | 22.0 | 27.0 | 78.0 | 0.3462 | 0.2821 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 8.4535 | 0.006 | 3646.5475 | 2527.5941 | 86.0 | 299.0 | 0.2876 | 66.0 | 0.2207 | 18.0 | 25.0 | 64.0 | 0.3906 | 0.2812 | 21.0 | 30.0 | 73.0 | 0.4110 | 0.2877 | 24.0 | 26.0 | 78.0 | 0.3333 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 8.9836 | 0.006 | 3875.2248 | 2686.1011 | 84.0 | 299.0 | 0.2809 | 67.0 | 0.2241 | 21.0 | 26.0 | 64.0 | 0.4062 | 0.3281 | 19.0 | 27.0 | 73.0 | 0.3699 | 0.2603 | 24.0 | 26.0 | 78.0 | 0.3333 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 9.3455 | 0.006 | 4031.3285 | 2794.3040 | 83.0 | 299.0 | 0.2776 | 70.0 | 0.2341 | 22.0 | 28.0 | 64.0 | 0.4375 | 0.3438 | 21.0 | 26.0 | 73.0 | 0.3562 | 0.2877 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 9.6760 | 0.006 | 4173.8792 | 2893.1126 | 82.0 | 299.0 | 0.2742 | 73.0 | 0.2441 | 25.0 | 28.0 | 64.0 | 0.4375 | 0.3906 | 21.0 | 24.0 | 73.0 | 0.3288 | 0.2877 | 24.0 | 25.0 | 78.0 | 0.3205 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 9.9141 | 0.006 | 4276.5853 | 2964.3030 | 81.0 | 299.0 | 0.2709 | 74.0 | 0.2475 | 27.0 | 29.0 | 64.0 | 0.4531 | 0.4219 | 20.0 | 23.0 | 73.0 | 0.3151 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 10.0790 | 0.006 | 4347.7385 | 3013.6227 | 82.0 | 299.0 | 0.2742 | 75.0 | 0.2508 | 27.0 | 29.0 | 64.0 | 0.4531 | 0.4219 | 21.0 | 22.0 | 73.0 | 0.3014 | 0.2877 | 24.0 | 25.0 | 78.0 | 0.3205 | 0.3077 | 3.0 | 6.0 | 83.0 | 0.0723 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 10.1972 | 0.006 | 4398.7081 | 3048.9521 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 10.3103 | 0.006 | 4447.5317 | 3082.7941 | 81.0 | 299.0 | 0.2709 | 74.0 | 0.2475 | 27.0 | 29.0 | 64.0 | 0.4531 | 0.4219 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 25.0 | 78.0 | 0.3205 | 0.3077 | 3.0 | 6.0 | 83.0 | 0.0723 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 10.4009 | 0.006 | 4486.6061 | 3109.8784 | 81.0 | 299.0 | 0.2709 | 76.0 | 0.2542 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 10.4894 | 0.006 | 4524.7495 | 3136.3174 | 80.0 | 299.0 | 0.2676 | 76.0 | 0.2542 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 4.0 | 83.0 | 0.0482 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 10.5657 | 0.006 | 4557.6851 | 3159.1466 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 10.5629 | 0.006 | 4556.4933 | 3158.3205 | 81.0 | 299.0 | 0.2709 | 76.0 | 0.2542 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 10.6133 | 0.006 | 4578.2155 | 3173.3771 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 10.6343 | 0.006 | 4587.2687 | 3179.6523 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 10.6752 | 0.006 | 4604.9267 | 3191.8920 | 78.0 | 299.0 | 0.2609 | 73.0 | 0.2441 | 27.0 | 29.0 | 64.0 | 0.4531 | 0.4219 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 10.7068 | 0.006 | 4618.5561 | 3201.3392 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 10.7169 | 0.006 | 4622.9235 | 3204.3664 | 81.0 | 299.0 | 0.2709 | 76.0 | 0.2542 | 30.0 | 32.0 | 64.0 | 0.5 | 0.4688 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 10.7301 | 0.006 | 4628.5873 | 3208.2922 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 10.7590 | 0.006 | 4641.0636 | 3216.9401 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 10.7597 | 0.006 | 4641.3614 | 3217.1465 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 18.0 | 19.0 | 73.0 | 0.2603 | 0.2466 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 10.8047 | 0.006 | 4660.7928 | 3230.6154 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 27.0 | 29.0 | 64.0 | 0.4531 | 0.4219 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 10.7758 | 0.006 | 4648.3271 | 3221.9749 | 78.0 | 299.0 | 0.2609 | 73.0 | 0.2441 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 18.0 | 19.0 | 73.0 | 0.2603 | 0.2466 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 10.7398 | 0.006 | 4632.7890 | 3211.2047 | 80.0 | 299.0 | 0.2676 | 75.0 | 0.2508 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 10.7692 | 0.006 | 4645.4815 | 3220.0024 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 28.0 | 30.0 | 64.0 | 0.4688 | 0.4375 | 19.0 | 20.0 | 73.0 | 0.2740 | 0.2603 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 39.0 | 39 | 10.7422 | 0.006 | 4633.8010 | 3211.9061 | 79.0 | 299.0 | 0.2642 | 74.0 | 0.2475 | 29.0 | 31.0 | 64.0 | 0.4844 | 0.4531 | 18.0 | 19.0 | 73.0 | 0.2603 | 0.2466 | 24.0 | 24.0 | 78.0 | 0.3077 | 0.3077 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MyResumite/CV_Analyzer
|
MyResumite
| 2025-08-18T23:37:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-18T23:36:45Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
koloni/blockassist-bc-deadly_graceful_stingray_1755558529
|
koloni
| 2025-08-18T23:34:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:34:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755558268
|
chainway9
| 2025-08-18T23:33:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:33:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/RC-Qwen2VL-2b-GGUF
|
mradermacher
| 2025-08-18T23:32:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"multimodal",
"llm",
"personalized_multimodal_understanding",
"en",
"base_model:weihongliang/RC-Qwen2VL-2b",
"base_model:quantized:weihongliang/RC-Qwen2VL-2b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T21:48:40Z |
---
base_model: weihongliang/RC-Qwen2VL-2b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- multimodal
- llm
- personalized_multimodal_understanding
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/weihongliang/RC-Qwen2VL-2b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#RC-Qwen2VL-2b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RC-Qwen2VL-2b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
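As a minimal sketch for this repo's files (assuming a recent llama.cpp build; the multimodal CLI binary and its flags have changed names between versions), an image-plus-prompt run might look like:

```bash
llama-mtmd-cli \
  -m RC-Qwen2VL-2b.Q4_K_M.gguf \
  --mmproj RC-Qwen2VL-2b.mmproj-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```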
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.mmproj-f16.gguf) | mmproj-f16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RC-Qwen2VL-2b-GGUF/resolve/main/RC-Qwen2VL-2b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
razor534/blockassist-bc-lazy_extinct_termite_1755559789
|
razor534
| 2025-08-18T23:30:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T23:30:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohammadmahdinouri/moa-adapter-init
|
mohammadmahdinouri
| 2025-08-18T23:30:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ModernALBERT",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-18T23:30:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
josephkchen/vygr
|
josephkchen
| 2025-08-18T23:28:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T22:44:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: vygr
---
# Vygr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vygr` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "vygr",
"lora_weights": "https://huggingface.co/josephkchen/vygr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('josephkchen/vygr', weight_name='lora.safetensors')
image = pipeline('vygr').images[0]
image.save("vygr.png")  # save the generated image
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3500
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/josephkchen/vygr/discussions) to add images that show off what youβve made with this LoRA.
|