modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756113070
|
Ferdi3425
| 2025-08-25T09:11:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:11:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
swardiantara/ADFLER-xlnet-base-cased
|
swardiantara
| 2025-08-25T09:11:24Z | 7 | 0 | null |
[
"pytorch",
"safetensors",
"xlnet",
"token-classification",
"en",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"region:us"
] |
token-classification
| 2024-11-14T11:46:29Z |
---
license: mit
language:
- en
base_model:
- xlnet/xlnet-base-cased
pipeline_tag: token-classification
---
|
Josephzzz/act-fold-towel
|
Josephzzz
| 2025-08-25T09:11:17Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Josephzzz/fold_towel",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-25T09:01:16Z |
---
datasets: Josephzzz/fold_towel
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a condensed guide to training and running inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112909
|
Ferdi3425
| 2025-08-25T09:08:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:08:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eason668/2b509713-486e-488a-bf91-393179e986f5
|
eason668
| 2025-08-25T09:08:18Z | 36 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"lora",
"transformers",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T11:40:48Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- base_model:adapter:Qwen/Qwen2.5-1.5B
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: 2b509713-486e-488a-bf91-393179e986f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.13.0.dev0`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f480d36acec9bc4e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: eason668/2b509713-486e-488a-bf91-393179e986f5
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/f480d36acec9bc4e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_max_length: 2048
tokenizer_truncation: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.1
wandb_entity: null
wandb_mode: online
wandb_project: Gradients-On-Demand
wandb_run: 2b509713-486e-488a-bf91-393179e986f5
wandb_runid: 2b509713-486e-488a-bf91-393179e986f5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2b509713-486e-488a-bf91-393179e986f5
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7391
- Memory/max Mem Active(gib): 10.49
- Memory/max Mem Allocated(gib): 10.49
- Memory/device Mem Reserved(gib): 12.74
## Model description
More information needed
## Intended uses & limitations
More information needed
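As a minimal inference sketch (an assumption, not part of the original training setup), the LoRA adapter in this repository can be loaded on top of the base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Qwen2.5-1.5B base model and attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "eason668/2b509713-486e-488a-bf91-393179e986f5")

# The prompt below is a placeholder; the exact prompt format used in training is not documented here.
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```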
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- total_eval_batch_size: 64
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log | 0 | 0 | 1.1335 | 8.89 | 8.89 | 9.42 |
| 0.9481 | 0.0280 | 13 | 0.9827 | 10.49 | 10.49 | 11.72 |
| 0.84 | 0.0561 | 26 | 0.7955 | 10.49 | 10.49 | 12.74 |
| 0.7109 | 0.0841 | 39 | 0.7662 | 10.49 | 10.49 | 12.74 |
| 0.7087 | 0.1121 | 52 | 0.7523 | 10.49 | 10.49 | 12.74 |
| 0.7001 | 0.1401 | 65 | 0.7443 | 10.49 | 10.49 | 12.74 |
| 0.7474 | 0.1682 | 78 | 0.7404 | 10.49 | 10.49 | 12.74 |
| 0.7315 | 0.1962 | 91 | 0.7391 | 10.49 | 10.49 | 12.74 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
koloni/blockassist-bc-deadly_graceful_stingray_1756111238
|
koloni
| 2025-08-25T09:08:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:08:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manueldeprada/dola
|
manueldeprada
| 2025-08-25T09:08:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"custom_generate",
"conversational",
"arxiv:2309.03883",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T09:08:09Z |
---
library_name: transformers
tags:
- custom_generate
---
## Description
Implementation of [Decoding by Contrasting Layers (DoLa)](https://huggingface.co/papers/2309.03883),
a contrastive decoding strategy for improving factuality and reducing hallucinations in language model outputs.
DoLa works by **contrasting the logits** from the final layer with those from earlier layers of the model,
amplifying factual knowledge localized in specific layers and suppressing spurious information.
This can be useful for:
* **Short-answer tasks** (e.g., TruthfulQA): use higher layers (`dola_layers="high"`)
* **Long-answer reasoning tasks** (e.g., GSM8K, StrategyQA, FACTOR, VicunaQA): use lower layers (`dola_layers="low"`)
DoLa is **not recommended for smaller models** such as GPT-2, as the improvement may be negligible.
This implementation matches the `DoLa` functionality present in `transformers<4.53.0`.
---
## Base model
* [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
---
## Model compatibility
* Decoder-only transformer models
---
## Additional Arguments
* **`dola_layers`** (*str* or *List\[int]*, optional):
Which earlier layers to contrast with the final layer. Can be:
  * `"low"`: the lower half of the layers (recommended for long-answer tasks)
  * `"high"`: the upper half of the layers (recommended for short-answer tasks)
  * a list of integer indices (e.g., `[18, 20]`)
**Note:**
* Layer 0 is the word embedding; layer 1 is the first transformer block.
* If the model has tied word embeddings, layer 0 is skipped and counting starts at layer 2.
* Typical defaults:
| # Layers | `"low"` range | `"high"` range |
| -------- | ------------------- | ------------------- |
| > 40 | `range(0, 20, 2)` | `range(N - 20, N, 2)` |
| ≤ 40 | `range(0, N//2, 2)` | `range(N//2, N, 2)` |
* **`repetition_penalty`** (*float*, optional, defaults to `None`):
Helps reduce repetition. A value of `1.2` is recommended.
---
## Output Type changes
* The `generate` method output remains the same as default `transformers` generation,
but logits are post-processed using the DoLa contrastive scoring before token selection.
---
## Example usage
### Using higher layers (short-answer tasks)
```python
# requires `transformers>=4.56.0`; in earlier releases DoLa was part of the library itself
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3-0.6B", torch_dtype=torch.float16
).to("cuda")
inputs = tokenizer("What is the highest peak in the world?", return_tensors="pt").to("cuda")
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=False,
custom_generate="transformers-community/dola",
trust_remote_code=True,
dola_layers="high"
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
---
### Contrasting specific layers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3-0.6B", torch_dtype=torch.float16
).to("cuda")
inputs = tokenizer("What is the highest peak in the world?", return_tensors="pt").to("cuda")
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=False,
repetition_penalty=1.2,
custom_generate="transformers-community/dola",
trust_remote_code=True,
dola_layers=[18, 20]
)
# Only decode the newly generated tokens
print(tokenizer.batch_decode(outputs[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```
|
quanglequocduy/hipages_sentiment
|
quanglequocduy
| 2025-08-25T09:08:09Z | 3 | 0 | null |
[
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"en",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-08-25T03:30:23Z |
---
language: en
license: apache-2.0
pipeline_tag: text-classification
tags:
- text-classification
- sentiment-analysis
---
# Sentiment Analysis for Hipages Homeowner Reviews
This is a fine-tuned DistilBERT model for classifying sentiment as positive or negative.
**Model:** `distilbert-base-uncased`
**Dataset:** Custom dataset from Hipages
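A minimal usage sketch with the Transformers pipeline (the label names returned depend on this model's configuration):
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="quanglequocduy/hipages_sentiment")

# Example review; the positive/negative label names come from the model config.
print(classifier("The tradie arrived on time and did a fantastic job."))
```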
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112750
|
Ferdi3425
| 2025-08-25T09:06:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:06:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andyyy324/blockassist-bc-dappled_fierce_alligator_1756111442
|
andyyy324
| 2025-08-25T09:05:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled fierce alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:05:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled fierce alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1756111198
|
chainway9
| 2025-08-25T09:05:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:05:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756112620
|
eusuf01
| 2025-08-25T09:04:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:04:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112471
|
Ferdi3425
| 2025-08-25T09:01:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T09:01:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756108119
|
acidjp
| 2025-08-25T09:00:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:59:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arqwe23/blockassist-bc-gregarious_nasty_prawn_1756111296
|
arqwe23
| 2025-08-25T08:58:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious nasty prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:58:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious nasty prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
albertuspekerti/whispertiny_fruit25syl_v7_2
|
albertuspekerti
| 2025-08-25T08:58:03Z | 108 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:albertuspekerti/whispertiny_fruit25syl_v3_2",
"base_model:finetune:albertuspekerti/whispertiny_fruit25syl_v3_2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T02:47:49Z |
---
license: apache-2.0
base_model: albertuspekerti/whispertiny_fruit25syl_v3_2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whispertiny_fruit25syl_v7_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whispertiny_fruit25syl_v7_2
This model is a fine-tuned version of [albertuspekerti/whispertiny_fruit25syl_v3_2](https://huggingface.co/albertuspekerti/whispertiny_fruit25syl_v3_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0405
- Wer: 2.34
## Model description
More information needed
## Intended uses & limitations
More information needed
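As a minimal transcription sketch (not part of the original card; `sample.wav` is a placeholder path to a local recording):
```python
from transformers import pipeline

# Load the fine-tuned Whisper-tiny checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="albertuspekerti/whispertiny_fruit25syl_v7_2")

# Transcribe a local audio file.
print(asr("sample.wav")["text"])
```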
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 900000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0015 | 0.00 | 2000 | 0.1650 | 13.69 |
| 0.0023 | 0.00 | 4000 | 0.4859 | 26.23 |
| 0.0017 | 0.01 | 6000 | 0.3551 | 23.24 |
| 0.0030 | 0.01 | 8000 | 0.1757 | 18.02 |
| 0.0015 | 0.01 | 10000 | 0.2069 | 18.25 |
| 0.0365 | 1.00 | 12000 | 0.7034 | 41.99 |
| 0.0007 | 1.00 | 14000 | 0.2721 | 20.08 |
| 0.0012 | 1.01 | 16000 | 0.1604 | 14.07 |
| 0.0038 | 1.01 | 18000 | 0.5626 | 28.47 |
| 0.0019 | 1.01 | 20000 | 0.2777 | 23.72 |
| 0.0031 | 1.01 | 22000 | 0.2175 | 17.92 |
| 0.0199 | 2.00 | 24000 | 0.2511 | 17.33 |
| 0.0014 | 2.00 | 26000 | 0.1804 | 16.60 |
| 0.0007 | 2.01 | 28000 | 0.1997 | 15.62 |
| 0.0017 | 2.01 | 30000 | 0.1679 | 13.25 |
| 0.0021 | 2.01 | 32000 | 0.6248 | 29.96 |
| 0.0020 | 2.01 | 34000 | 0.2805 | 22.55 |
| 0.0007 | 3.00 | 36000 | 0.1912 | 15.72 |
| 0.0017 | 3.00 | 38000 | 0.6397 | 24.51 |
| 0.0024 | 3.01 | 40000 | 0.1851 | 13.44 |
| 0.0005 | 3.01 | 42000 | 0.2569 | 21.54 |
| 0.0005 | 3.01 | 44000 | 0.5288 | 28.18 |
| 0.0050 | 4.00 | 46000 | 0.2538 | 15.05 |
| 0.0026 | 4.00 | 48000 | 0.0993 | 10.70 |
| 0.0009 | 4.00 | 50000 | 0.5376 | 23.57 |
| 0.0010 | 4.01 | 52000 | 0.4009 | 21.67 |
| 0.0018 | 4.01 | 54000 | 0.2099 | 14.74 |
| 0.0016 | 4.01 | 56000 | 0.1439 | 13.13 |
| 0.0107 | 5.00 | 58000 | 0.0643 | 7.68 |
| 0.0011 | 5.00 | 60000 | 0.1293 | 11.51 |
| 0.0009 | 5.01 | 62000 | 0.0721 | 8.04 |
| 0.0008 | 5.01 | 64000 | 0.3456 | 24.58 |
| 0.0007 | 5.01 | 66000 | 0.1930 | 16.79 |
| 0.0005 | 5.01 | 68000 | 0.1542 | 12.18 |
| 0.0009 | 6.00 | 70000 | 0.1657 | 13.00 |
| 0.0004 | 6.00 | 72000 | 0.1262 | 11.16 |
| 0.0004 | 6.01 | 74000 | 0.2233 | 12.73 |
| 0.0010 | 6.01 | 76000 | 0.1117 | 11.79 |
| 0.0021 | 6.01 | 78000 | 0.3011 | 24.35 |
| 0.0014 | 6.01 | 80000 | 0.1536 | 14.13 |
| 0.0010 | 7.00 | 82000 | 0.0863 | 7.93 |
| 0.0014 | 7.00 | 84000 | 0.2631 | 16.91 |
| 0.0003 | 7.01 | 86000 | 0.1333 | 10.72 |
| 0.0004 | 7.01 | 88000 | 0.1723 | 16.66 |
| 0.0008 | 7.01 | 90000 | 0.2139 | 19.11 |
| 0.0047 | 8.00 | 92000 | 0.0988 | 8.88 |
| 0.0003 | 8.00 | 94000 | 0.0784 | 7.12 |
| 0.0004 | 8.00 | 96000 | 0.2343 | 17.37 |
| 0.0019 | 8.01 | 98000 | 0.2397 | 18.74 |
| 0.0010 | 8.01 | 100000 | 0.1677 | 12.29 |
| 0.0004 | 8.01 | 102000 | 0.1551 | 14.36 |
| 0.0013 | 9.00 | 104000 | 0.1314 | 11.37 |
| 0.0003 | 9.00 | 106000 | 0.1554 | 9.61 |
| 0.0004 | 9.01 | 108000 | 0.0906 | 9.04 |
| 0.0001 | 9.01 | 110000 | 0.6560 | 34.02 |
| 0.0009 | 9.01 | 112000 | 0.2301 | 17.58 |
| 0.0007 | 9.01 | 114000 | 0.2159 | 14.63 |
| 0.0007 | 10.00 | 116000 | 0.1608 | 10.86 |
| 0.0005 | 10.00 | 118000 | 0.0831 | 8.62 |
| 0.0005 | 10.01 | 120000 | 0.1421 | 9.19 |
| 0.0004 | 10.01 | 122000 | 0.1187 | 10.68 |
| 0.0003 | 10.01 | 124000 | 0.4213 | 25.16 |
| 0.0006 | 10.01 | 126000 | 0.2728 | 16.96 |
| 0.0002 | 11.00 | 128000 | 0.0876 | 9.04 |
| 0.0008 | 11.00 | 130000 | 0.1947 | 16.94 |
| 0.0005 | 11.01 | 132000 | 0.0990 | 8.75 |
| 0.0008 | 11.01 | 134000 | 0.1164 | 8.94 |
| 0.0004 | 11.01 | 136000 | 0.1203 | 12.85 |
| 0.0019 | 12.00 | 138000 | 0.0438 | 4.48 |
| 0.0003 | 12.00 | 140000 | 0.1088 | 8.65 |
| 0.0004 | 12.00 | 142000 | 0.1215 | 9.92 |
| 0.0015 | 12.01 | 144000 | 0.2885 | 21.79 |
| 0.0014 | 12.01 | 146000 | 0.1768 | 12.10 |
| 0.0004 | 12.01 | 148000 | 0.1216 | 10.13 |
| 0.0013 | 13.00 | 150000 | 0.1339 | 10.36 |
| 0.0017 | 13.00 | 152000 | 0.1112 | 8.96 |
| 0.0001 | 13.01 | 154000 | 0.0948 | 7.98 |
| 0.0002 | 13.01 | 156000 | 0.3108 | 20.68 |
| 0.0008 | 13.01 | 158000 | 0.1587 | 15.30 |
| 0.0015 | 13.01 | 160000 | 0.1346 | 10.93 |
| 0.0005 | 14.00 | 162000 | 0.1653 | 13.21 |
| 0.0005 | 14.00 | 164000 | 0.1019 | 11.03 |
| 0.0006 | 14.01 | 166000 | 0.1058 | 8.35 |
| 0.0002 | 14.01 | 168000 | 0.1135 | 10.51 |
| 0.0002 | 14.01 | 170000 | 0.2589 | 21.16 |
| 0.0010 | 15.00 | 172000 | 0.0872 | 7.39 |
| 0.0002 | 15.00 | 174000 | 0.0600 | 6.66 |
| 0.0007 | 15.00 | 176000 | 0.4865 | 31.15 |
| 0.0011 | 15.01 | 178000 | 0.2016 | 15.32 |
| 0.0005 | 15.01 | 180000 | 0.1639 | 10.70 |
| 0.0006 | 15.01 | 182000 | 0.1186 | 12.50 |
| 0.0006 | 16.00 | 184000 | 0.1166 | 9.92 |
| 0.0005 | 16.00 | 186000 | 0.1155 | 7.33 |
| 0.0004 | 16.01 | 188000 | 0.0656 | 6.72 |
| 0.0008 | 16.01 | 190000 | 0.2959 | 17.06 |
| 0.0002 | 16.01 | 192000 | 0.1560 | 12.60 |
| 0.0005 | 16.01 | 194000 | 0.2069 | 12.79 |
| 0.0015 | 17.00 | 196000 | 0.1045 | 8.83 |
| 0.0002 | 17.00 | 198000 | 0.1018 | 8.73 |
| 0.0003 | 17.01 | 200000 | 0.1292 | 7.20 |
| 0.0009 | 17.01 | 202000 | 0.0931 | 9.25 |
| 0.0019 | 17.01 | 204000 | 0.1964 | 17.42 |
| 0.0013 | 17.01 | 206000 | 0.0973 | 7.10 |
| 0.0007 | 18.00 | 208000 | 0.0941 | 7.79 |
| 0.0003 | 18.00 | 210000 | 0.1350 | 11.12 |
| 0.0001 | 18.01 | 212000 | 0.1246 | 8.33 |
| 0.0002 | 18.01 | 214000 | 0.1008 | 10.11 |
| 0.0001 | 18.01 | 216000 | 0.1457 | 12.60 |
| 0.0013 | 19.00 | 218000 | 0.0435 | 4.33 |
| 0.0002 | 19.00 | 220000 | 0.0605 | 5.19 |
| 0.0003 | 19.00 | 222000 | 0.2734 | 18.36 |
| 0.0003 | 19.01 | 224000 | 0.2369 | 15.24 |
| 0.0001 | 19.01 | 226000 | 0.0959 | 6.91 |
| 0.0003 | 19.01 | 228000 | 0.0936 | 7.28 |
| 0.0008 | 20.00 | 230000 | 0.0783 | 6.45 |
| 0.0002 | 20.00 | 232000 | 0.1215 | 9.19 |
| 0.0002 | 20.01 | 234000 | 0.0851 | 8.71 |
| 0.0001 | 20.01 | 236000 | 0.3519 | 22.84 |
| 0.0003 | 20.01 | 238000 | 0.1444 | 12.20 |
| 0.0005 | 20.01 | 240000 | 0.1581 | 9.67 |
| 0.0003 | 21.00 | 242000 | 0.1343 | 9.57 |
| 0.0003 | 21.00 | 244000 | 0.1086 | 7.72 |
| 0.0002 | 21.01 | 246000 | 0.1358 | 7.54 |
| 0.0002 | 21.01 | 248000 | 0.0717 | 6.30 |
| 0.0004 | 21.01 | 250000 | 0.1298 | 10.74 |
| 0.0001 | 21.01 | 252000 | 0.1443 | 9.32 |
| 0.0003 | 22.00 | 254000 | 0.0451 | 4.10 |
| 0.0002 | 22.00 | 256000 | 0.1284 | 10.82 |
| 0.0001 | 22.01 | 258000 | 0.1014 | 7.26 |
| 0.0005 | 22.01 | 260000 | 0.1175 | 7.58 |
| 0.0002 | 22.01 | 262000 | 0.0875 | 7.64 |
| 0.0006 | 23.00 | 264000 | 0.0402 | 3.81 |
| 0.0001 | 23.00 | 266000 | 0.0462 | 5.05 |
| 0.0002 | 23.00 | 268000 | 0.0650 | 7.98 |
| 0.0007 | 23.01 | 270000 | 0.1429 | 12.75 |
| 0.0002 | 23.01 | 272000 | 0.0977 | 7.75 |
| 0.0001 | 23.01 | 274000 | 0.0982 | 8.52 |
| 0.0005 | 24.00 | 276000 | 0.0998 | 7.05 |
| 0.0002 | 24.00 | 278000 | 0.1020 | 7.75 |
| 0.0001 | 24.01 | 280000 | 0.0735 | 6.64 |
| 0.0002 | 24.01 | 282000 | 0.3529 | 19.78 |
| 0.0003 | 24.01 | 284000 | 0.1658 | 14.15 |
| 0.0001 | 24.01 | 286000 | 0.1560 | 11.45 |
| 0.0002 | 25.00 | 288000 | 0.1662 | 10.49 |
| 0.0004 | 25.00 | 290000 | 0.1091 | 10.30 |
| 0.0001 | 25.01 | 292000 | 0.1403 | 9.94 |
| 0.0002 | 25.01 | 294000 | 0.1119 | 8.92 |
| 0.0000 | 25.01 | 296000 | 0.3880 | 22.00 |
| 0.0002 | 26.00 | 298000 | 0.0605 | 4.67 |
| 0.0000 | 26.00 | 300000 | 0.0621 | 4.92 |
| 0.0003 | 26.00 | 302000 | 0.2317 | 13.61 |
| 0.0002 | 26.01 | 304000 | 0.0863 | 6.93 |
| 0.0005 | 26.01 | 306000 | 0.0940 | 6.74 |
| 0.0006 | 26.01 | 308000 | 0.0879 | 8.10 |
| 0.0001 | 27.00 | 310000 | 0.0515 | 4.14 |
| 0.0001 | 27.00 | 312000 | 0.0680 | 4.42 |
| 0.0000 | 27.01 | 314000 | 0.0987 | 8.14 |
| 0.0005 | 27.01 | 316000 | 0.3038 | 16.45 |
| 0.0000 | 27.01 | 318000 | 0.0865 | 6.36 |
| 0.0003 | 27.01 | 320000 | 0.1186 | 7.60 |
| 0.0004 | 28.00 | 322000 | 0.1314 | 8.14 |
| 0.0000 | 28.00 | 324000 | 0.0978 | 6.28 |
| 0.0001 | 28.01 | 326000 | 0.1021 | 7.26 |
| 0.0007 | 28.01 | 328000 | 0.1285 | 10.45 |
| 0.0006 | 28.01 | 330000 | 0.1283 | 10.91 |
| 0.0003 | 28.01 | 332000 | 0.1309 | 9.92 |
| 0.0002 | 29.00 | 334000 | 0.1114 | 9.09 |
| 0.0006 | 29.00 | 336000 | 0.1049 | 9.48 |
| 0.0000 | 29.01 | 338000 | 0.0879 | 7.08 |
| 0.0001 | 29.01 | 340000 | 0.0644 | 5.57 |
| 0.0004 | 29.01 | 342000 | 0.1470 | 10.53 |
| 0.0003 | 30.00 | 344000 | 0.0425 | 3.39 |
| 0.0000 | 30.00 | 346000 | 0.0358 | 3.22 |
| 0.0002 | 30.00 | 348000 | 0.2155 | 13.50 |
| 0.0002 | 30.01 | 350000 | 0.1227 | 10.49 |
| 0.0001 | 30.01 | 352000 | 0.1400 | 7.77 |
| 0.0033 | 30.01 | 354000 | 0.1205 | 10.40 |
| 0.0001 | 31.00 | 356000 | 0.0440 | 3.39 |
| 0.0002 | 31.00 | 358000 | 0.0825 | 5.44 |
| 0.0002 | 31.01 | 360000 | 0.0743 | 7.77 |
| 0.0004 | 31.01 | 362000 | 0.2200 | 15.57 |
| 0.0002 | 31.01 | 364000 | 0.1102 | 8.39 |
| 0.0001 | 31.01 | 366000 | 0.1132 | 7.81 |
| 0.0003 | 32.00 | 368000 | 0.1195 | 8.92 |
| 0.0001 | 32.00 | 370000 | 0.0605 | 4.67 |
| 0.0000 | 32.01 | 372000 | 0.0545 | 4.31 |
| 0.0003 | 32.01 | 374000 | 0.1234 | 10.55 |
| 0.0001 | 32.01 | 376000 | 0.0810 | 8.04 |
| 0.0001 | 32.01 | 378000 | 0.1075 | 7.14 |
| 0.0004 | 33.00 | 380000 | 0.0766 | 6.05 |
| 0.0005 | 33.00 | 382000 | 0.0983 | 8.42 |
| 0.0000 | 33.01 | 384000 | 0.0772 | 5.69 |
| 0.0002 | 33.01 | 386000 | 0.0823 | 6.89 |
| 0.0004 | 33.01 | 388000 | 0.0938 | 8.33 |
| 0.0001 | 34.00 | 390000 | 0.0531 | 3.75 |
| 0.0003 | 34.00 | 392000 | 0.0452 | 3.43 |
| 0.0004 | 34.00 | 394000 | 0.1294 | 11.22 |
| 0.0004 | 34.01 | 396000 | 0.1213 | 10.17 |
| 0.0000 | 34.01 | 398000 | 0.1238 | 8.77 |
| 0.0004 | 34.01 | 400000 | 0.0922 | 6.09 |
| 0.0003 | 35.00 | 402000 | 0.0613 | 4.73 |
| 0.0000 | 35.00 | 404000 | 0.0533 | 3.18 |
| 0.0001 | 35.01 | 406000 | 0.0726 | 6.26 |
| 0.0002 | 35.01 | 408000 | 0.2262 | 13.33 |
| 0.0002 | 35.01 | 410000 | 0.0819 | 7.35 |
| 0.0000 | 35.01 | 412000 | 0.0978 | 6.85 |
| 0.0001 | 36.00 | 414000 | 0.1319 | 8.42 |
| 0.0001 | 36.00 | 416000 | 0.0543 | 4.31 |
| 0.0002 | 36.01 | 418000 | 0.0757 | 5.57 |
| 0.0001 | 36.01 | 420000 | 0.0819 | 7.62 |
| 0.0001 | 36.01 | 422000 | 0.1564 | 10.95 |
| 0.0001 | 37.00 | 424000 | 0.0912 | 6.49 |
| 0.0003 | 37.00 | 426000 | 0.0702 | 5.32 |
| 0.0004 | 37.00 | 428000 | 0.1477 | 9.02 |
| 0.0000 | 37.01 | 430000 | 0.0772 | 6.18 |
| 0.0001 | 37.01 | 432000 | 0.0775 | 6.47 |
| 0.0002 | 37.01 | 434000 | 0.0546 | 5.00 |
| 0.0000 | 38.00 | 436000 | 0.0444 | 3.27 |
| 0.0001 | 38.00 | 438000 | 0.0380 | 2.85 |
| 0.0005 | 38.01 | 440000 | 0.1071 | 8.73 |
| 0.0003 | 38.01 | 442000 | 0.1291 | 10.03 |
| 0.0000 | 38.01 | 444000 | 0.0772 | 6.18 |
| 0.0001 | 38.01 | 446000 | 0.0799 | 6.28 |
| 0.0001 | 39.00 | 448000 | 0.0480 | 3.56 |
| 0.0000 | 57.01 | 658000 | 0.0630 | 3.75 |
| 0.0001 | 57.01 | 660000 | 0.0610 | 3.73 |
| 0.0000 | 57.01 | 662000 | 0.0430 | 2.72 |
| 0.0006 | 57.01 | 664000 | 0.0494 | 2.87 |
| 0.0000 | 58.00 | 666000 | 0.0523 | 2.95 |
| 0.0003 | 58.00 | 668000 | 0.0455 | 2.78 |
| 0.0001 | 58.01 | 670000 | 0.0379 | 2.43 |
| 0.0000 | 58.01 | 672000 | 0.0588 | 3.64 |
| 0.0000 | 58.01 | 674000 | 0.0365 | 2.34 |
| 0.0000 | 58.01 | 676000 | 0.0395 | 2.60 |
| 0.0000 | 59.00 | 678000 | 0.0662 | 3.77 |
| 0.0000 | 59.00 | 680000 | 0.0376 | 2.34 |
| 0.0000 | 59.01 | 682000 | 0.0406 | 2.34 |
| 0.0003 | 59.01 | 684000 | 0.0385 | 2.22 |
| 0.0001 | 59.01 | 686000 | 0.0551 | 3.18 |
| 0.0000 | 60.00 | 688000 | 0.0409 | 2.72 |
| 0.0001 | 60.00 | 690000 | 0.0397 | 2.32 |
| 0.0001 | 60.00 | 692000 | 0.0471 | 3.31 |
| 0.0001 | 60.01 | 694000 | 0.0348 | 2.16 |
| 0.0000 | 60.01 | 696000 | 0.0338 | 2.22 |
| 0.0000 | 60.01 | 698000 | 0.0358 | 2.30 |
| 0.0000 | 61.00 | 700000 | 0.0376 | 2.24 |
| 0.0000 | 61.00 | 702000 | 0.0386 | 2.41 |
| 0.0000 | 61.01 | 704000 | 0.0429 | 2.60 |
| 0.0002 | 61.01 | 706000 | 0.0675 | 3.94 |
| 0.0000 | 61.01 | 708000 | 0.0381 | 2.47 |
| 0.0000 | 61.01 | 710000 | 0.0419 | 2.72 |
| 0.0001 | 62.00 | 712000 | 0.0607 | 3.54 |
| 0.0000 | 62.00 | 714000 | 0.0379 | 2.22 |
| 0.0000 | 62.01 | 716000 | 0.0412 | 2.60 |
| 0.0008 | 62.01 | 718000 | 0.0753 | 4.00 |
| 0.0001 | 62.01 | 720000 | 0.0420 | 2.45 |
| 0.0000 | 63.00 | 722000 | 0.0385 | 2.30 |
| 0.0000 | 63.00 | 724000 | 0.0563 | 2.99 |
| 0.0000 | 63.00 | 726000 | 0.0358 | 2.18 |
| 0.0000 | 63.01 | 728000 | 0.0337 | 2.14 |
| 0.0001 | 63.01 | 730000 | 0.0351 | 2.26 |
| 0.0000 | 63.01 | 732000 | 0.0408 | 2.60 |
| 0.0000 | 64.00 | 734000 | 0.0339 | 2.05 |
| 0.0001 | 64.00 | 736000 | 0.0373 | 2.14 |
| 0.0000 | 64.01 | 738000 | 0.0566 | 3.37 |
| 0.0000 | 64.01 | 740000 | 0.0374 | 2.41 |
| 0.0000 | 64.01 | 742000 | 0.0350 | 2.20 |
| 0.0000 | 64.01 | 744000 | 0.0354 | 2.24 |
| 0.0000 | 65.00 | 746000 | 0.0341 | 2.16 |
| 0.0000 | 65.00 | 748000 | 0.0366 | 2.37 |
| 0.0001 | 65.01 | 750000 | 0.0459 | 2.57 |
| 0.0001 | 65.01 | 752000 | 0.0494 | 2.76 |
| 0.0000 | 65.01 | 754000 | 0.0333 | 1.99 |
| 0.0000 | 65.01 | 756000 | 0.0345 | 1.99 |
| 0.0000 | 66.00 | 758000 | 0.0401 | 2.32 |
| 0.0001 | 66.00 | 760000 | 0.0315 | 1.82 |
| 0.0000 | 66.01 | 762000 | 0.0365 | 1.90 |
| 0.0000 | 66.01 | 764000 | 0.0446 | 2.55 |
| 0.0000 | 66.01 | 766000 | 0.0370 | 2.11 |
| 0.0000 | 67.00 | 768000 | 0.0322 | 1.90 |
| 0.0000 | 67.00 | 770000 | 0.0394 | 2.18 |
| 0.0001 | 67.00 | 772000 | 0.0437 | 2.60 |
| 0.0000 | 67.01 | 774000 | 0.0334 | 1.95 |
| 0.0000 | 67.01 | 776000 | 0.0363 | 2.14 |
| 0.0000 | 67.01 | 778000 | 0.0368 | 2.16 |
| 0.0000 | 68.00 | 780000 | 0.0315 | 1.86 |
| 0.0000 | 68.00 | 782000 | 0.0409 | 2.28 |
| 0.0001 | 68.01 | 784000 | 0.0441 | 2.53 |
| 0.0000 | 68.01 | 786000 | 0.0380 | 2.26 |
| 0.0000 | 68.01 | 788000 | 0.0384 | 2.20 |
| 0.0000 | 68.01 | 790000 | 0.0372 | 2.18 |
| 0.0000 | 69.00 | 792000 | 0.0374 | 2.26 |
| 0.0000 | 69.00 | 794000 | 0.0357 | 2.20 |
| 0.0000 | 69.01 | 796000 | 0.0415 | 2.47 |
| 0.0000 | 69.01 | 798000 | 0.0439 | 2.60 |
| 0.0000 | 69.01 | 800000 | 0.0411 | 2.24 |
| 0.0002 | 69.01 | 802000 | 0.0416 | 2.32 |
| 0.0000 | 70.00 | 804000 | 0.0395 | 2.30 |
| 0.0000 | 70.00 | 806000 | 0.0352 | 2.09 |
| 0.0001 | 70.01 | 808000 | 0.0353 | 2.07 |
| 0.0000 | 70.01 | 810000 | 0.0387 | 2.03 |
| 0.0000 | 70.01 | 812000 | 0.0387 | 2.07 |
| 0.0000 | 71.00 | 814000 | 0.0370 | 2.14 |
| 0.0000 | 71.00 | 816000 | 0.0400 | 2.22 |
| 0.0001 | 71.00 | 818000 | 0.0458 | 2.64 |
| 0.0000 | 71.01 | 820000 | 0.0376 | 2.09 |
| 0.0000 | 71.01 | 822000 | 0.0386 | 2.18 |
| 0.0000 | 71.01 | 824000 | 0.0385 | 2.16 |
| 0.0000 | 72.00 | 826000 | 0.0369 | 2.14 |
| 0.0000 | 72.00 | 828000 | 0.0405 | 2.18 |
| 0.0000 | 72.01 | 830000 | 0.0474 | 2.57 |
| 0.0000 | 72.01 | 832000 | 0.0484 | 2.68 |
| 0.0000 | 72.01 | 834000 | 0.0445 | 2.53 |
| 0.0000 | 72.01 | 836000 | 0.0444 | 2.51 |
| 0.0000 | 73.00 | 838000 | 0.0447 | 2.55 |
| 0.0000 | 73.00 | 840000 | 0.0411 | 2.45 |
| 0.0000 | 73.01 | 842000 | 0.0413 | 2.49 |
| 0.0000 | 73.01 | 844000 | 0.0430 | 2.43 |
| 0.0000 | 73.01 | 846000 | 0.0409 | 2.37 |
| 0.0000 | 74.00 | 848000 | 0.0399 | 2.39 |
| 0.0000 | 74.00 | 850000 | 0.0425 | 2.47 |
| 0.0000 | 74.00 | 852000 | 0.0390 | 2.24 |
| 0.0000 | 74.01 | 854000 | 0.0392 | 2.28 |
| 0.0000 | 74.01 | 856000 | 0.0410 | 2.30 |
| 0.0000 | 74.01 | 858000 | 0.0409 | 2.30 |
| 0.0000 | 75.00 | 860000 | 0.0393 | 2.26 |
| 0.0000 | 75.00 | 862000 | 0.0429 | 2.47 |
| 0.0000 | 75.01 | 864000 | 0.0426 | 2.43 |
| 0.0000 | 75.01 | 866000 | 0.0421 | 2.45 |
| 0.0000 | 75.01 | 868000 | 0.0432 | 2.47 |
| 0.0000 | 75.01 | 870000 | 0.0425 | 2.45 |
| 0.0000 | 76.00 | 872000 | 0.0423 | 2.45 |
| 0.0000 | 76.00 | 874000 | 0.0423 | 2.43 |
| 0.0000 | 76.01 | 876000 | 0.0423 | 2.45 |
| 0.0000 | 76.01 | 878000 | 0.0423 | 2.41 |
| 0.0000 | 76.01 | 880000 | 0.0422 | 2.41 |
| 0.0000 | 76.01 | 882000 | 0.0422 | 2.37 |
| 0.0000 | 77.00 | 884000 | 0.0415 | 2.37 |
| 0.0000 | 77.00 | 886000 | 0.0405 | 2.32 |
| 0.0000 | 77.01 | 888000 | 0.0405 | 2.32 |
| 0.0000 | 77.01 | 890000 | 0.0405 | 2.32 |
| 0.0000 | 77.01 | 892000 | 0.0406 | 2.32 |
| 0.0000 | 78.00 | 894000 | 0.0406 | 2.34 |
| 0.0000 | 78.00 | 896000 | 0.0405 | 2.32 |
| 0.0000 | 78.00 | 898000 | 0.0405 | 2.34 |
| 0.0000 | 78.01 | 900000 | 0.0405 | 2.34 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bokanamo/blockassist-bc-huge_lumbering_toad_1756112169
|
bokanamo
| 2025-08-25T08:57:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge lumbering toad",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:57:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge lumbering toad
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756112119
|
eusuf01
| 2025-08-25T08:55:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:55:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1
|
ChenWu98
| 2025-08-25T08:55:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T08:54:03Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/qohkms55)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756110902
|
Sayemahsjn
| 2025-08-25T08:54:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:54:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1756110500
|
aleebaster
| 2025-08-25T08:54:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:54:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zavodman332/blockassist-bc-sharp_aquatic_hare_1756111940
|
zavodman332
| 2025-08-25T08:52:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sharp aquatic hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:52:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sharp aquatic hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricodr/blockassist-bc-twitchy_toothy_clam_1756111819
|
ricodr
| 2025-08-25T08:51:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:51:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thaymanhinhsamsung24h/tiem-thay-man-hinh-samsung-a73-gia-re
|
thaymanhinhsamsung24h
| 2025-08-25T08:51:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T08:50:43Z |
<h1><strong>Affordable Samsung A73 5G Screen Replacement in Ho Chi Minh City – Professional Service at Bệnh Viện Điện Thoại, Laptop 24h</strong></h1>
<p>When the screen of your Samsung A73 5G runs into trouble, finding an <a href="https://chamsocdidong.com/thay-man-hinh-samsung-galaxy-a73-ds16940" target="_blank">affordable Samsung A73 5G screen replacement shop in Ho Chi Minh City</a> becomes essential. At <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> we provide genuine Samsung screen replacement at a reasonable price, with guaranteed quality. Here is an overview of the service.</p>
<p style="text-align: center;"><img src="https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/thay-man-hinh-samsung-a73.jpg" alt="" /></p>
<h3>When Does a Samsung Screen Need Replacing?</h3>
<p>The screen is one of the most important parts of a Samsung phone, and a damaged screen has a major impact on everyday use. The following signs mean it is time to visit a <a href="https://issuu.com/thaymanhinhsamsung24h" target="_blank">low-cost Samsung screen replacement shop</a>:</p>
<ol>
<li>
<p><strong>Cracked or shattered glass</strong>: the clearest sign. After a drop or hard knock the panel may crack or shatter, and a new screen is needed to keep using the phone safely.</p>
</li>
<li>
<p><strong>No display or a dim, blurry image</strong>: if the screen shows nothing, or the picture is faint and smeared, the panel is damaged and a replacement restores the original display quality.</p>
</li>
<li>
<p><strong>Unresponsive or laggy touch</strong>: when touches are not registered, or respond with a delay, the digitizer is faulty and replacing the screen is the best fix.</p>
</li>
<li>
<p><strong>Ink bleed or black patches</strong>: dark spots and spreading "ink" stains reduce both looks and usability and indicate the panel should be replaced.</p>
</li>
<li>
<p><strong>Wrong colours or uneven brightness</strong>: inaccurate colours or patchy brightness are resolved by fitting a genuine replacement panel.</p>
</li>
</ol>
<p>If you notice any of these issues, bring the phone to <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> for inspection and a genuine replacement screen.</p>
<h3>Where to Get a Genuine Samsung Screen Replaced at a Fair Price</h3>
<p>When looking for a place to have a genuine Samsung screen fitted at a low price, <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> is a reliable choice, for the following reasons:</p>
<ul>
<li>
<p><strong>Genuine Samsung panels</strong>: we only fit original Samsung screens, so the phone keeps working stably after the repair.</p>
</li>
<li>
<p><strong>Reasonable, transparent pricing</strong>: the quoted price is the price you pay, with no hidden fees.</p>
</li>
<li>
<p><strong>Fast turnaround</strong>: we know you need your phone back quickly, so a screen replacement normally takes only about 1-2 hours.</p>
</li>
<li>
<p><strong>Long warranty</strong>: every replacement comes with an extended warranty, so you can use the phone with confidence.</p>
</li>
</ul>
<p>With quality service at a fair price, <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> is a trusted address for genuine Samsung screen replacement in Ho Chi Minh City.</p>
<p style="text-align: center;"><img src="https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/truoc-va-sau-khi-thay-man-hinh-samsung-A73(1).jpg" alt="" /></p>
<h3>Does Replacing the Screen Affect the Phone?</h3>
<p>A common worry is whether replacing the screen will affect other parts of the phone. If you choose <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong>, you can be confident it will not.</p>
<p><strong>Why a screen replacement does not harm the phone</strong>:</p>
<ol>
<li>
<p><strong>Genuine screens are used</strong>: original Samsung panels are fully compatible with the rest of the hardware, so the phone remains stable after the swap.</p>
</li>
<li>
<p><strong>Experienced technicians</strong>: our technicians have replaced Samsung screens for years and work carefully and precisely, without disturbing other components.</p>
</li>
<li>
<p><strong>Thorough checks after the repair</strong>: touch, display and brightness are all tested before the phone is handed back, to confirm everything works normally.</p>
</li>
</ol>
<p>For these reasons, a screen replacement at <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> will not affect your phone.</p>
<h3>Bệnh Viện Điện Thoại, Laptop 24h Fits Genuine Screens for Every Customer</h3>
<p><strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> is committed to using <strong>genuine Samsung screens</strong> for every replacement, so the phone is protected and all features keep working as they did originally.</p>
<p><strong>The genuine panel types we use</strong>:</p>
<ul>
<li>
<p><strong>Super AMOLED</strong>: Samsung's high-end panel technology with vivid colours, high contrast and low power consumption, used in the Galaxy S, Note and A series.</p>
</li>
<li>
<p><strong>Standard AMOLED</strong>: suited to mid-range models, offering sharp images and good energy efficiency.</p>
</li>
<li>
<p><strong>LCD</strong>: suited to budget models, with high brightness and clear visibility in all lighting conditions.</p>
</li>
</ul>
<p>We are committed to a quality Samsung screen replacement service, so you can use your phone without worrying about display problems.</p>
<p style="text-align: center;"><img src="https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/cam-ket-voi-khach-hang.jpg" alt="" /></p>
<h3>How to Use the Service at Bệnh Viện Điện Thoại, Laptop 24h</h3>
<p>To have your screen replaced at <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong>, follow these steps:</p>
<ol>
<li>
<p><strong>Contact us</strong>: call the hotline or visit <strong>chamsocdidong.com</strong> for advice or to book an appointment.</p>
</li>
<li>
<p><strong>Bring the phone to a store</strong>: visit one of our branches so a technician can inspect the device and replace the screen.</p>
</li>
<li>
<p><strong>Screen replacement</strong>: the replacement itself is quick, usually about 1-2 hours.</p>
</li>
<li>
<p><strong>Receive the warranty</strong>: after the replacement you receive a genuine warranty card, so you can use the phone with peace of mind for a long time.</p>
</li>
</ol>
<p>Visit <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> for a genuine, fast and affordable Samsung screen replacement. We are always ready to serve you!</p>
|
ankitA2003/blockassist-bc-fishy_dappled_elephant_1756111828
|
ankitA2003
| 2025-08-25T08:51:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy dappled elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:51:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy dappled elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/collage-art-style
|
Muapi
| 2025-08-25T08:50:34Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:50:02Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Collage Art Style

**Base model**: Flux.1 D
**Trained words**: hyacinthcollage
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:726166@811998", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/romance-book-cover-ce
|
Muapi
| 2025-08-25T08:49:57Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:49:42Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Romance Book Cover - CE

**Base model**: Flux.1 D
**Trained words**: rmcebkCE style
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:747447@835872", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
hoatac/gemma-3n-E2B-Turkish-Medical-QA-Merged
|
hoatac
| 2025-08-25T08:48:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-25T08:18:48Z |
---
base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** hoatac
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
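A minimal text-only usage sketch (an assumption, not from the original card; it presumes a transformers release with Gemma 3n support and that the merged weights load directly through the image-text-to-text pipeline):
```python
from transformers import pipeline

# Load the merged Gemma 3n checkpoint; requires a transformers version with Gemma 3n support.
pipe = pipeline("image-text-to-text", model="hoatac/gemma-3n-E2B-Turkish-Medical-QA-Merged")

# Placeholder Turkish medical question in chat format.
messages = [{"role": "user", "content": [{"type": "text", "text": "Baş ağrısı için hangi ilaçlar önerilir?"}]}]
print(pipe(text=messages, max_new_tokens=128))
```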
|
aceail/qwen2-test_250825
|
aceail
| 2025-08-25T08:48:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T01:10:05Z |
---
library_name: transformers
model_name: qwen2-test_250825
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-test_250825
This model is a fine-tuned version of an unspecified base model (the base model was not recorded in the training configuration).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aceail/qwen2-test_250825", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aceail-yonsei-university/huggingface/runs/tin5dlkh)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron
|
badaoui
| 2025-08-25T08:48:03Z | 20 | 0 | null |
[
"llama",
"neuron",
"optimized",
"aws-neuron",
"text-generation",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"region:us"
] |
text-generation
| 2025-08-22T12:36:16Z |
---
tags:
- neuron
- optimized
- aws-neuron
- text-generation
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
---
# Neuron-Optimized HuggingFaceTB/SmolLM2-135M-Instruct
This repository contains AWS Neuron-optimized files for [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
## Model Details
- **Base Model**: [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
- **Task**: text-generation
- **Optimization**: AWS Neuron compilation
- **Generated by**: [badaoui](https://huggingface.co/badaoui)
- **Generated using**: [Optimum Neuron Compiler Space](https://huggingface.co/spaces/optimum/neuron-export)
## Usage
This model has been optimized for AWS Neuron devices (Inferentia/Trainium). To use it:
```python
from optimum.neuron import NeuronModelForCausalLM
model = NeuronModelForCausalLM.from_pretrained("badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron")
```
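Generation then follows the usual Transformers API (a sketch, assuming the tokenizer files are included in this repository; otherwise load the tokenizer from the base model):
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron")
model = NeuronModelForCausalLM.from_pretrained("badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron")

# Standard generate() call; the compiled model runs on the Neuron cores.
inputs = tokenizer("What is machine learning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```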
## Performance
These files are pre-compiled for AWS Neuron devices and should provide improved inference performance compared to the original model when deployed on Inferentia or Trainium instances.
## Original Model
For the original model, training details, and more information, please visit: [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
|
Muapi/flux-katsura-masakazu-videogirl-i-s-d.n.a2-artist-style
|
Muapi
| 2025-08-25T08:47:34Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:47:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# [Flux] Katsura Masakazu (Video Girl Ai / I"s / D.N.A²) - Artist Style

**Base model**: Flux.1 D
**Trained words**: Ai Amano, Yoshizuki iori
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:910169@1018565", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
kabiawu/Llama-3.2-3B-ascii-cats-aj-lora-F32-GGUF
|
kabiawu
| 2025-08-25T08:47:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:kabiawu/Llama-3.2-3B-ascii-cats-aj-lora",
"base_model:quantized:kabiawu/Llama-3.2-3B-ascii-cats-aj-lora",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T08:47:07Z |
---
base_model: kabiawu/Llama-3.2-3B-ascii-cats-aj-lora
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
---
# kabiawu/Llama-3.2-3B-ascii-cats-aj-lora-F32-GGUF
This LoRA adapter was converted to GGUF format from [`kabiawu/Llama-3.2-3B-ascii-cats-aj-lora`](https://huggingface.co/kabiawu/Llama-3.2-3B-ascii-cats-aj-lora) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/kabiawu/Llama-3.2-3B-ascii-cats-aj-lora) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Llama-3.2-3B-ascii-cats-aj-lora-f32.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Llama-3.2-3B-ascii-cats-aj-lora-f32.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
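As a quick sanity check (a sketch, not from the original card), you can query the running `llama-server` over HTTP; the default port is 8080 unless changed with `--port`:
```bash
# The adapter passed via --lora above is applied to every request.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Draw an ascii cat:", "n_predict": 128}'
```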
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756111316
|
Ferdi3425
| 2025-08-25T08:42:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:42:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-fierce_webbed_pig_1756109721
|
motza0025
| 2025-08-25T08:41:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fierce webbed pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:41:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fierce webbed pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KritiBanka1204/llama_finetuned_1320
|
KritiBanka1204
| 2025-08-25T08:39:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] |
text-generation
| 2025-08-25T08:38:15Z |
---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:codellama/CodeLlama-7b-Instruct-hf
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756109532
|
indoempatnol
| 2025-08-25T08:39:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:39:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taposighorai26/blockassist-bc-pudgy_aquatic_raccoon_1756111063
|
taposighorai26
| 2025-08-25T08:38:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy aquatic raccoon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:38:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy aquatic raccoon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756109502
|
kojeklollipop
| 2025-08-25T08:38:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:38:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756111073
|
Ferdi3425
| 2025-08-25T08:38:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:38:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sunrunner79hot1/blockassist-bc-bold_noisy_woodpecker_1756109500
|
sunrunner79hot1
| 2025-08-25T08:38:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold noisy woodpecker",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:38:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold noisy woodpecker
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EmilRyd/gpt-oss-20b-aquarat-ground-truth-on-policy-3e5-stylized-1000-100
|
EmilRyd
| 2025-08-25T08:38:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T08:32:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756109392
|
lisaozill03
| 2025-08-25T08:36:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:36:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756110951
|
eusuf01
| 2025-08-25T08:36:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:36:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756110900
|
Ferdi3425
| 2025-08-25T08:35:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:35:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756110834
|
liukevin666
| 2025-08-25T08:35:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:34:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seoseo99/qwen2_1.5B_ge_train_summarize_ko
|
seoseo99
| 2025-08-25T08:32:44Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T08:32:44Z |
---
license: apache-2.0
---
|
Muapi/air-bubbles_v02
|
Muapi
| 2025-08-25T08:30:31Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:30:17Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Air Bubbles_v02

**Base model**: Flux.1 D
**Trained words**:
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:592057@1161795", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1756109332
|
Medved444
| 2025-08-25T08:29:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:29:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
useless223/qwen3-16bit-lora_model
|
useless223
| 2025-08-25T08:28:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T08:28:41Z |
---
base_model: unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** useless223
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Muapi/carbonite-style-xl-sd1.5-f1d-illu-pony
|
Muapi
| 2025-08-25T08:25:03Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:13:34Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Carbonite style XL + SD1.5 + F1D + Illu + Pony

**Base model**: Flux.1 D
**Trained words**: frozen Carbonite board style, frozen , Carbonite board
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:513542@1441498", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756108540
|
katanyasekolah
| 2025-08-25T08:21:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:21:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thefirstgoku/25_second_l29
|
thefirstgoku
| 2025-08-25T08:19:27Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-25T08:18:47Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
jamyoun/blockassist-bc-hunting_beaked_camel_1756109814
|
jamyoun
| 2025-08-25T08:19:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hunting beaked camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:19:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hunting beaked camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AylinNaebzadeh/AVA-Llama-3-V2-formalizer-qlora
|
AylinNaebzadeh
| 2025-08-25T08:19:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T07:58:48Z |
---
library_name: transformers
tags: []
---
|
OpenSQZ/Qwen2.5-3B-classifier
|
OpenSQZ
| 2025-08-25T08:18:52Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-classification",
"quality-assessment",
"text-quality",
"regression",
"en",
"zh",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-21T06:53:53Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-1.5B
- Qwen/Qwen2.5-3B
task_categories:
- text-classification
language:
- en
- zh
tags:
- quality-assessment
- text-quality
- regression
pipeline_tag: text-classification
library_name: transformers
---
# Qwen2.5 Text Quality Classifier
Fine-tuned Qwen2.5-1.5B and Qwen2.5-3B models for automated text quality assessment. Predicts quality scores on a 0-1 scale focusing on educational value and mathematical intelligence.
## Model Details
- **Base Models**: Qwen2.5-1.5B / Qwen2.5-3B
- **Task**: Text Quality Regression
- **Languages**: English, Chinese
- **Training Data**: [OpenSQZ/Classifiers-Data](https://huggingface.co/datasets/OpenSQZ/Classifiers-Data)
- **Loss Function**: MSE Loss
## Performance
| Model | Test MSE Loss |
|-------|---------------|
| Qwen2.5-1.5B | 0.00226 |
| Qwen2.5-3B | 0.00209 |
## Quick Start
### Installation
```bash
pip install transformers torch
```
### Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "OpenSQZ/Qwen2.5-1.5B-Classifier" # or Qwen2.5-3B-Quality-Classifier
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Predict quality score
text = "Linear algebra is fundamental to understanding vector spaces and matrix operations in mathematics."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192)
with torch.no_grad():
    outputs = model(**inputs)
    score = torch.sigmoid(outputs.logits).item()
print(f"Quality Score: {score:.3f}") # Output: Quality Score: 0.847
```
## Quality Score Interpretation
| Score Range | Quality Level | Use Case |
|-------------|---------------|----------|
| 0.8 - 1.0 | Excellent | Premium training data |
| 0.6 - 0.8 | Good | Standard training data |
| 0.4 - 0.6 | Average | Conditional use |
| 0.0 - 0.4 | Poor | Filter out |
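A small helper for acting on these ranges (illustrative only; the thresholds are copied from the table above):
```python
def quality_level(score: float) -> str:
    # Thresholds follow the interpretation table above
    if score >= 0.8:
        return "excellent"
    if score >= 0.6:
        return "good"
    if score >= 0.4:
        return "average"
    return "poor"

# Example: keep only documents suitable as training data (score >= 0.6)
scored_docs = [("Linear algebra is fundamental ...", 0.85), ("lol idk", 0.21)]
kept = [text for text, s in scored_docs if quality_level(s) in ("excellent", "good")]
```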
## Model Selection
- **1.5B Model**: Faster inference, good for real-time applications
- **3B Model**: Higher accuracy, better for batch processing
## Limitations
- Optimized for educational and mathematical content
- May not generalize well to creative or subjective content
- Scores should be used as guidance, not absolute judgments
## Citation
```bibtex
@misc{qwen25_quality_classifier_2025,
  title     = {Qwen2.5 Text Quality Classifier},
  author    = {Chao Li and Yifan Zhang},
  year      = {2025},
  publisher = {OpenSQZ}
}
```
## License
Apache 2.0
|
aleebaster/blockassist-bc-sly_eager_boar_1756108145
|
aleebaster
| 2025-08-25T08:13:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:13:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
david3621/blockassist-bc-gentle_meek_cat_1756108053
|
david3621
| 2025-08-25T08:12:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle meek cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:02:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis
|
BootesVoid
| 2025-08-25T08:12:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-25T08:12:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AL1NA
---
# Cmeco6Lt90Ef4Rts8Oxxaogj7_Cmeo1Efdy08J4Tlqbr631Hcis
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AL1NA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "AL1NA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis', weight_name='lora.safetensors')
image = pipeline('AL1NA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
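For example, you can soften the trained style by lowering the adapter weight (a sketch using the `set_adapters` API available in recent diffusers releases; the adapter name is arbitrary):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis',
                           weight_name='lora.safetensors', adapter_name='al1na')
pipeline.set_adapters(['al1na'], adapter_weights=[0.8])  # 1.0 = full strength
image = pipeline('AL1NA').images[0]
image.save('al1na.png')
```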
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis/discussions) to add images that show off what you've made with this LoRA.
|
thyYu2024/qwen2-7b-instruct-trl-sft-newnewnew
|
thyYu2024
| 2025-08-25T08:11:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T07:56:35Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-newnewnew
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-newnewnew
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thyYu2024/qwen2-7b-instruct-trl-sft-newnewnew", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruoxue2-stony-brook-university/qwen2vl-sft-mydataset/runs/49vbxpv9)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
511break/KRT_lora_model
|
511break
| 2025-08-25T08:09:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T08:09:34Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 511break
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Muapi/love-booster-for-rendered-romance-contest-flux-il
|
Muapi
| 2025-08-25T08:08:54Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:08:36Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# LOVE Booster for Rendered Romance Contest [FLUX+IL]

**Base model**: Flux.1 D
**Trained words**: aidmaLOVEboost
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1206739@1358978", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/anato-finnstark
|
Muapi
| 2025-08-25T08:07:20Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:07:08Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Anato Finnstark

**Base model**: Flux.1 D
**Trained words**: Art by Anato Finnstark
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1404194@1587261", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/1960-s-ads-illustration-bob-peak-style
|
Muapi
| 2025-08-25T08:06:37Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:06:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 1960's Ads Illustration - Bob Peak Style

**Base model**: Flux.1 D
**Trained words**: a brushstroke illustration of, in the style of bob-peak
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:584357@1122467", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756107545
|
maxibillion1975
| 2025-08-25T08:06:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:05:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ayasindemir/finetuned_model
|
ayasindemir
| 2025-08-25T08:05:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T08:05:24Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ayasindemir
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Muapi/80-s-horror-fantasy-flux
|
Muapi
| 2025-08-25T08:05:42Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:05:19Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 80's Horror Fantasy - Flux

**Base model**: Flux.1 D
**Trained words**:
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:815392@911794", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
klmdr22/blockassist-bc-wild_loud_newt_1756109019
|
klmdr22
| 2025-08-25T08:04:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:04:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fnlp/XY_Tokenizer_TTSD_V0_32k
|
fnlp
| 2025-08-25T08:04:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T07:41:34Z |
---
license: apache-2.0
---
|
usmanalam82/Qwen_0.5b_FineTuned_v1_5epochs
|
usmanalam82
| 2025-08-25T08:03:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T08:03:26Z |
---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** usmanalam82
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
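A minimal inference sketch (not part of the original card), assuming this repository holds the merged model weights rather than a bare adapter:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="usmanalam82/Qwen_0.5b_FineTuned_v1_5epochs",
    device_map="auto",
)
print(generator("Explain overfitting in one sentence.", max_new_tokens=64)[0]["generated_text"])
```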
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756107512
|
helmutsukocok
| 2025-08-25T08:03:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T08:02:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/the-pulp-session
|
Muapi
| 2025-08-25T08:02:35Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:02:21Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# The Pulp Session

**Base model**: Flux.1 D
**Trained words**: pulp cartoon
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:702282@785748", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756108766
|
Ferdi3425
| 2025-08-25T07:59:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:59:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756108682
|
klmdr22
| 2025-08-25T07:59:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:58:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_ar_alpaca_0.66_part_SFT_2e-05
|
joanna302
| 2025-08-25T07:58:42Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T15:55:26Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_ar_alpaca_0.66_part_SFT_2e-05
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_ar_alpaca_0.66_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_ar_alpaca_0.66_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_ar_alpaca_0.66_part_SFT_2e-05/runs/espls84s)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
N4F1U/sentiment-analysis-distilbert
|
N4F1U
| 2025-08-25T07:58:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T07:58:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/ghost-in-the-shell-xl-f1d-japanese-manga-cyberpunk-style-xl
|
Muapi
| 2025-08-25T07:57:53Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:57:36Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ghost in the Shell XL + F1D (Japanese Manga Cyberpunk) style XL

**Base model**: Flux.1 D
**Trained words**: Ghost in the Shell style
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:412507@1135580", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/the-nazg-l-the-lord-of-the-rings-flux1.d-sdxl
|
Muapi
| 2025-08-25T07:57:08Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:56:52Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# The Nazgûl - The Lord of the Rings - Flux1.D & SDXL

**Base model**: Flux.1 D
**Trained words**: Nazgûl wearing a cloak
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:211589@871727", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
m97j/npc_LoRA-fps
|
m97j
| 2025-08-25T07:56:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"transformers",
"korean",
"npc",
"game-ai",
"text-generation",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-25T07:25:49Z |
---
license: mit
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- transformers
- korean
- npc
- game-ai
---
# npc_LoRA
**npc_LoRA** is a LoRA adapter built on top of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct), designed to generate emotionally rich, context-aware dialogue for non-player characters (NPCs) in Korean-language game environments.
This project is part of a portfolio for industrial service roles in AI and game development, showcasing practical model design, multi-head training, and real-world integration strategies.
## Model Architecture
- **Base model**: Qwen2.5-3B-Instruct
- **Adapter type**: LoRA (via PEFT)
- **Language**: Korean
- **Task**: Text generation with auxiliary heads
- **Heads added**:
- `delta_head`: Predicts 2D continuous values for narrative state change
- `flag_head`: Predicts 3 or more binary flags for game logic triggers
## Training Setup
- **Environment**: Google Colab with A100 GPU
- **Quantization**: 4-bit (nf4) via BitsAndBytes
- **Batch size**: 2 (gradient accumulation: 8)
- **Epochs**: 6
- **Losses**:
- Language modeling (CrossEntropy)
- Delta prediction (MSE)
- Flag prediction (BCE)
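The three objectives can be combined into one training loss roughly as follows (an illustrative sketch; the loss weights are hypothetical, not taken from the actual training run):
```python
import torch.nn.functional as F

def total_loss(lm_logits, labels, delta_pred, delta_target, flag_logits, flag_target,
               w_delta=1.0, w_flag=1.0):
    # Language modeling term plus the two auxiliary heads
    lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1), ignore_index=-100)
    delta = F.mse_loss(delta_pred, delta_target)                                   # 2D narrative state change
    flag = F.binary_cross_entropy_with_logits(flag_logits, flag_target.float())    # game logic triggers
    return lm + w_delta * delta + w_flag * flag
```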
## Prompt Format
```text
<SYS>
NPC_ID=...
TAGS:
location=...
quest_stage=...
relationship=...
trust=...
npc_mood=...
player_reputation=...
style=...
REQUIRE:
...
FORMAT:
<RESPONSE>...</RESPONSE>
<DELTA ...>
<FLAG ...>
</SYS>
<CTX>
player: ...
npc: ...
</CTX>
<PLAYER>...
<NPC>
```
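A small helper for assembling this prompt from game state (illustrative only; the field values are hypothetical, the layout follows the template above):
```python
def build_prompt(npc_id, tags, require, context_turns, player_line):
    tag_block = "\n".join(f"{k}={v}" for k, v in tags.items())
    ctx = "\n".join(f"{speaker}: {line}" for speaker, line in context_turns)
    return (
        f"<SYS>\nNPC_ID={npc_id}\nTAGS:\n{tag_block}\nREQUIRE:\n{require}\n"
        "FORMAT:\n<RESPONSE>...</RESPONSE>\n<DELTA ...>\n<FLAG ...>\n</SYS>\n"
        f"<CTX>\n{ctx}\n</CTX>\n<PLAYER>{player_line}\n<NPC>"
    )

prompt = build_prompt(
    npc_id="villager_01",
    tags={"location": "market", "quest_stage": "2", "trust": "0.4", "npc_mood": "wary"},
    require="stay in character; answer in Korean",
    context_turns=[("player", "Hello."), ("npc", "...")],
    player_line="Can you help me with the quest?",
)
```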
## Inference Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
import torch.nn as nn

BASE_MODEL = "Qwen/Qwen2.5-3B-Instruct"
ADAPTER_PATH = "minjae/npc_LoRA"

tokenizer = AutoTokenizer.from_pretrained(ADAPTER_PATH, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, ADAPTER_PATH)

# Add auxiliary heads (these Linear layers are newly initialized here;
# load trained head weights separately if available)
hidden_size = model.config.hidden_size
model.delta_head = nn.Linear(hidden_size, 2).to(model.device)
model.flag_head = nn.Linear(hidden_size, 3).to(model.device)

prompt = "<SYS>...<CTX>...<PLAYER>...<NPC>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
    gen_ids = model.generate(**inputs, max_new_tokens=100)

generated_text = tokenizer.decode(gen_ids[0], skip_special_tokens=True)
last_hidden = outputs.hidden_states[-1][:, -1, :]
delta = model.delta_head(last_hidden)
flag = model.flag_head(last_hidden)

print("Response:", generated_text)
print("Delta:", delta)
print("Flags:", torch.sigmoid(flag))
```
## Use Cases
- NPC dialogue generation in Korean RPGs
- Emotionally adaptive storytelling
- Game logic trigger prediction (e.g., quest progression, item handoff)
## Repository Structure
```
npc_LoRA/
├── lora-output-jason-mom-head/   # LoRA adapter files
└── README.md
```
## Notes
- Adapter is optimized for Korean-language prompts and multi-turn dialogue.
- Designed to integrate with game engines or AI-driven simulation platforms.
- Compatible with Hugging Face Spaces (CPU/GPU) and local inference.
## License
MIT
## Author
Created by **Minjae**
Portfolio: [GitHub Profile](https://github.com/m97j)
Contact: [[email protected]]
|
Muapi/zoot-s-human-photo-realmaxxer-for-flux
|
Muapi
| 2025-08-25T07:56:40Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:56:30Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Zoot's Human Photo Realmaxxer For Flux

**Base model**: Flux.1 D
**Trained words**:
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:790722@884240", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
dadsdD4fs/blockassist-bc-restless_poisonous_orangutan_1756107740
|
dadsdD4fs
| 2025-08-25T07:56:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless poisonous orangutan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:56:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless poisonous orangutan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zavodman332/blockassist-bc-sharp_aquatic_hare_1756108455
|
zavodman332
| 2025-08-25T07:54:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sharp aquatic hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:54:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sharp aquatic hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756108401
|
2hpsatt
| 2025-08-25T07:54:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:54:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/marvel-rivals-style-il-noobai-flux-shrekman-styles
|
Muapi
| 2025-08-25T07:53:19Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:53:08Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Marvel Rivals Style - IL&NoobAI&Flux | Shrekman Styles

**Base model**: Flux.1 D
**Trained words**: MarvelRivalsStyle-Flux.V1
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1054387@1183083", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
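Style LoRAs generally apply more reliably when the trained trigger word appears in the prompt. A variant of the request above that prepends the trained word (the rest of the prompt wording is purely illustrative) might look like:
```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    # Prepend the LoRA's trained trigger word so the style is picked up
    "prompt": "MarvelRivalsStyle-Flux.V1, masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1054387@1183083", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```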
|
Muapi/the-forbidden-book
|
Muapi
| 2025-08-25T07:53:03Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:52:50Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# The Forbidden Book

**Base model**: Flux.1 D
**Trained words**: frbddnbk
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:879212@990547", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/vhs-box
|
Muapi
| 2025-08-25T07:52:33Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:52:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# VHS Box

**Base model**: Flux.1 D
**Trained words**: vhs_box
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:839390@939082", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756108181
|
liukevin666
| 2025-08-25T07:50:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:50:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756106611
|
calegpedia
| 2025-08-25T07:50:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:50:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756108108
|
Ferdi3425
| 2025-08-25T07:49:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:48:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tor4k/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sizable_robust_squirrel
|
tor4k
| 2025-08-25T07:48:59Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sizable_robust_squirrel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T10:39:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sizable_robust_squirrel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1756106581
|
quantumxnode
| 2025-08-25T07:48:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:48:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/ayami-kojima-style-flux
|
Muapi
| 2025-08-25T07:48:05Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:47:49Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ayami Kojima Style (Flux)

**Base model**: Flux.1 D
**Trained words**: Ayami Kojima, Traditional Artwork
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:743096@865844", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/flux.1-dev-lora-cinematic
|
Muapi
| 2025-08-25T07:47:19Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:47:08Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# FLUX.1-dev-LoRA-Cinematic

**Base model**: Flux.1 D
**Trained words**: cinematic_1940s
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1351798@1527024", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
szhuggingface/ModernBert_Unsloth_Test1
|
szhuggingface
| 2025-08-25T07:47:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T07:46:25Z |
---
base_model: answerdotai/ModernBERT-base
tags:
- text-generation-inference
- transformers
- unsloth
- modernbert
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** szhuggingface
- **License:** apache-2.0
- **Finetuned from model :** answerdotai/ModernBERT-base
This modernbert model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
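The card ships without a usage snippet. Assuming the checkpoint includes a standard sequence-classification head on ModernBERT (the label names are whatever was configured during fine-tuning), a minimal inference sketch looks like:
```python
from transformers import pipeline

# Assumes the uploaded checkpoint carries a sequence-classification head;
# label ids/names come from the fine-tuning configuration.
classifier = pipeline("text-classification", model="szhuggingface/ModernBert_Unsloth_Test1")
print(classifier("This is a quick smoke test of the fine-tuned ModernBERT classifier."))
```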
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756107943
|
indrarg
| 2025-08-25T07:46:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:46:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gulshanair/blockassist-bc-sprightly_pawing_turtle_1756107937
|
Gulshanair
| 2025-08-25T07:46:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly pawing turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:46:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly pawing turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/butts-on-stuff-flux
|
Muapi
| 2025-08-25T07:46:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:46:07Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Butts on Stuff [FLUX]

**Base model**: Flux.1 D
**Trained words**: Cartoon illustration of (OBJECT) shaped like a human buttocks, with exaggerated proportions.
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1111516@1248965", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1756107853
|
8septiadi8
| 2025-08-25T07:46:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious lightfooted mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:46:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious lightfooted mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756107805
|
2hpsatt
| 2025-08-25T07:44:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T07:44:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/niji-zero
|
Muapi
| 2025-08-25T07:44:05Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:43:50Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Niji Zero

**Base model**: Flux.1 D
**Trained words**:
## Usage (Python)
**Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:716145@800850", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
enteruto/checkpoints
|
enteruto
| 2025-08-25T07:40:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T07:27:36Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: checkpoints
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for checkpoints
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="enteruto/checkpoints", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
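As a rough illustration of that procedure (the dataset, hyperparameters, and output directory below are placeholders, not the actual training recipe), an SFT run with TRL typically looks like:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the data actually used for this checkpoint is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="checkpoints"),
)
trainer.train()
```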
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|