modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
nikilr/Llama3.1-8B-comp_k5 | nikilr | 2025-09-18T01:04:22Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-18T01:03:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weathon/qwen_2_5_vision_reward_long | weathon | 2025-09-18T01:00:43Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"vision_reward",
"dataset:zai-org/VisionRewardDB-Image",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T08:55:45Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: neg_aware_qwen
tags:
- generated_from_trainer
- sft
- trl
- vision_reward
licence: license
license: mit
datasets:
- zai-org/VisionRewardDB-Image
---
# Model Card for neg_aware_qwen
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl) on zai-org/VisionRewardDB-Image. It aims to evaluate image quality in the same way as VisionReward, but with a much smaller and faster model.
## Quick start
To use this model, simply query it on an image using one of the rating dimensions defined in VisionReward.
```python
import json
import re

import pandas as pd
from datasets import load_dataset
from transformers import pipeline

# Load the dataset (any other images work as well)
train_ds = load_dataset("zai-org/VisionRewardDB-Image", split="train[:40000]")
test_ds = load_dataset("zai-org/VisionRewardDB-Image", split="train[40000:]")

pipe = pipeline("image-text-to-text", model="weathon/qwen_2_5_vision_reward")

# Build the scoring guideline from rules.csv (per-dimension scores and descriptions)
df = pd.read_csv("rules.csv")
df.columns = df.columns.str.strip()
df["Dimension"] = df["Dimension"].ffill()
df["dim_key"] = df["Dimension"].apply(
    lambda x: re.search(r"\((.*?)\)", x).group(1) if re.search(r"\((.*?)\)", x) else x
)
guide = {
    dim_key: {
        int(row["Score"]): str(row["Description"]).strip()
        for _, row in group.iterrows()
    }
    for dim_key, group in df.groupby("dim_key")
}
question = f"You need to rate the quality of an image, guideline: {guide}."

def rate(image):
    messages = [
        {
            "role": "system",
            "content": [{"type": "text", "text": question}],
        },
        {
            "role": "user",
            "content": [{"type": "image", "image": image.resize((512, 512))}],
        },
    ]
    gen = pipe(text=messages, return_full_text=False)
    # The model emits a dict of per-dimension scores; sum them for a total score
    return sum(json.loads(gen[0]["generated_text"].replace("'", '"')).values())

# Compare the predicted total with the ground-truth annotation
rate(test_ds[3]["image"])
sum(test_ds[3]["annotation"].values())
```
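As a rough sketch of how you might check agreement with the dataset's annotations (the 50-image subset and the use of `pearsonr` are illustrative choices, not part of this card):
```python
from scipy.stats import pearsonr

# Score a small slice of the held-out split and compare with labeled totals
subset = test_ds.select(range(50))
preds = [rate(ex["image"]) for ex in subset]
labels = [sum(ex["annotation"].values()) for ex in subset]
r, _ = pearsonr(preds, labels)
print(f"Pearson r over {len(subset)} test images: {r:.3f}")
```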
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
We are still working on the paper; please keep an eye out for updates.
|
moutasem/qwen2vl-audio-custom | moutasem | 2025-09-18T00:35:47Z | 0 | 0 | transformers |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T00:35:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uwcc/Travelers | uwcc | 2025-09-18T00:30:06Z | 0 | 0 | diffusers |
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-17T09:54:19Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A church in a field on a sunny day, [trigger] style.
output:
url: samples/1758155338291__000004000_0.jpg
- text: A seal plays with a ball on the beach, [trigger] style.
output:
url: samples/1758102801433__000004000_0.jpg
- text: A clown at the circus rides on a zebra, [trigger] style.
output:
url: samples/1758155356423__000004000_1.jpg
- text: '[trigger]'
output:
url: samples/1758102819552__000004000_1.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Travelers
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Travelers
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `Travelers` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/Travelers/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('uwcc/Travelers', weight_name='Travelers.safetensors')
image = pipeline('A church in a field on a sunny day, Travelers style.').images[0]  # 'Travelers' is the trigger word
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
Daemontatox/FerrisMind | Daemontatox | 2025-09-18T00:24:38Z | 56 | 1 | transformers |
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"text-generation-inference",
"unsloth",
"hybrid-thinking",
"coding-assistant",
"conversational",
"en",
"dataset:Tesslate/Rust_Dataset",
"dataset:Tesslate/Gradient-Reasoning",
"base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"base_model:finetune:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-08T09:25:13Z |
---
base_model:
- Qwen/Qwen3-Coder-30B-A3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
- hybrid-thinking
- coding-assistant
license: apache-2.0
language:
- en
datasets:
- Tesslate/Rust_Dataset
- Tesslate/Gradient-Reasoning
library_name: transformers
new_version: Daemontatox/FerrisMind
---

# FerrisMind (Daemontatox, 2025)
## Model Details
- **Model name:** Daemontatox/FerrisMind
- **Developed by:** Daemontatox
- **Year released:** 2025
- **License:** apache-2.0
- **Base model:** [unsloth/qwen3-coder-30b-a3b-instruct](https://huggingface.co/unsloth/qwen3-coder-30b-a3b-instruct)
- **Model type:** Instruction-tuned large language model for code generation, designed to mimic hybrid thinking in a coding instruct model.
## Model Summary
FerrisMind is a fine-tuned variant of Qwen3 Coder Flash, specialized for **Rust programming**. It was trained with GRPO in an attempt to mimic hybrid thinking and bring it to a coding instruct model.
It is optimized for:
- Idiomatic Rust generation
- High-performance and memory-safe code practices
- Fast inference and completion speed
- Practical coding assistant tasks, from boilerplate scaffolding to compiler-level optimizations
## Intended Use
- Rust development assistance
- Generating idiomatic and production-ready Rust code
- Accelerating prototyping and compiler-level workflows
- Educational use for learning Rust best practices
### Out of Scope
- Non-code general conversation
- Unsafe or malicious code generation
## Training
- **Finetuned from:** unsloth/qwen3-coder-30b-a3b-instruct
- **Objective:** Specialization in Rust code generation and idiomatic best practices, mimicking hybrid thinking.
- **Methods:** Instruction tuning with GRPO and domain-specific data
## Limitations
- May generate non-compiling Rust code in complex cases
## Example Usage
```rust
// Example: Async file reader in idiomatic Rust
use tokio::fs::File;
use tokio::io::{self, AsyncReadExt};
#[tokio::main]
async fn main() -> io::Result<()> {
let mut file = File::open("example.txt").await?;
let mut contents = String::new();
file.read_to_string(&mut contents).await?;
println!("File content: {}", contents);
Ok(())
}
```
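A minimal inference sketch with 🤗 Transformers (assuming the standard chat-style text-generation pipeline; the prompt is illustrative, not from this card):
```python
from transformers import pipeline

# Load the model as an ordinary causal LM
generator = pipeline("text-generation", model="Daemontatox/FerrisMind", device_map="auto")
prompt = [{"role": "user", "content": "Write an idiomatic Rust function that reverses the words in a string."}]
output = generator(prompt, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```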
|
BootesVoid/cmek96h8w00yltlqb4h08yslu_cmfom62wp0bc2x0n080g74n2q | BootesVoid | 2025-09-18T00:12:40Z | 0 | 0 | diffusers |
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-18T00:12:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ASHLEY
---
# Cmek96H8W00Yltlqb4H08Yslu_Cmfom62Wp0Bc2X0N080G74N2Q
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ASHLEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ASHLEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmek96h8w00yltlqb4h08yslu_cmfom62wp0bc2x0n080g74n2q/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmek96h8w00yltlqb4h08yslu_cmfom62wp0bc2x0n080g74n2q', weight_name='lora.safetensors')
image = pipeline('ASHLEY').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmek96h8w00yltlqb4h08yslu_cmfom62wp0bc2x0n080g74n2q/discussions) to add images that show off what you’ve made with this LoRA.
|
Wwayu/EtherealAurora-12B-v2-mlx-4Bit | Wwayu | 2025-09-18T00:07:22Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"chatml",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"ja",
"base_model:yamatazen/EtherealAurora-12B-v2",
"base_model:quantized:yamatazen/EtherealAurora-12B-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-09-18T00:06:46Z |
---
base_model: yamatazen/EtherealAurora-12B-v2
library_name: transformers
tags:
- mergekit
- merge
- chatml
- mlx
- mlx-my-repo
language:
- en
- ja
license: apache-2.0
---
# Wwayu/EtherealAurora-12B-v2-mlx-4Bit
The Model [Wwayu/EtherealAurora-12B-v2-mlx-4Bit](https://huggingface.co/Wwayu/EtherealAurora-12B-v2-mlx-4Bit) was converted to MLX format from [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Wwayu/EtherealAurora-12B-v2-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
gumperto/Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-all-full-r32 | gumperto | 2025-09-18T00:05:13Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-17T23:49:51Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Bluebomber182/seed-vc-bigvgan_v2_24khz_100band_256x_model | Bluebomber182 | 2025-09-18T00:00:29Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-08T23:39:20Z |
---
license: cc-by-nc-4.0
---
This was trained on the Emilia dataset, together with trimmed-down Emilia-YODAS and AniSpeech datasets filtered to samples passing a 3.6 MOS score threshold. It was trained with the f0 condition set to true, so you can run app_svc.py on it. Note: it has an inference problem with any of the checkpoints.
|
onnxmodelzoo/coat_tiny_Opset16 | onnxmodelzoo | 2025-09-17T23:55:35Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:55:31Z |
---
language: en
license: apache-2.0
model_name: coat_tiny_Opset16.onnx
tags:
- Computer_Vision
---
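This repository contains only the exported ONNX graph named in the front matter. A minimal inference sketch with `onnxruntime`, assuming the usual 1×3×224×224 ImageNet-style input for this architecture (an assumption; the card does not state the input shape):
```python
import numpy as np
import onnxruntime as ort

# Load the graph after downloading coat_tiny_Opset16.onnx from this repo
session = ort.InferenceSession("coat_tiny_Opset16.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW input

outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)  # e.g. (1, 1000) for an ImageNet classifier
```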
|
onnxmodelzoo/coat_lite_tiny_Opset18 | onnxmodelzoo | 2025-09-17T23:55:13Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:55:09Z |
---
language: en
license: apache-2.0
model_name: coat_lite_tiny_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/cait_xxs36_224_Opset17 | onnxmodelzoo | 2025-09-17T23:54:06Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:53:59Z |
---
language: en
license: apache-2.0
model_name: cait_xxs36_224_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/cait_m48_448_Opset18 | onnxmodelzoo | 2025-09-17T23:52:13Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:51:08Z |
---
language: en
license: apache-2.0
model_name: cait_m48_448_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/cait_m48_448_Opset16 | onnxmodelzoo | 2025-09-17T23:49:52Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:48:38Z |
---
language: en
license: apache-2.0
model_name: cait_m48_448_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/alexnet_Opset16 | onnxmodelzoo | 2025-09-17T23:46:26Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T23:46:09Z |
---
language: en
license: apache-2.0
model_name: alexnet_Opset16.onnx
tags:
- Computer_Vision
---
|
gumperto/Llama-3.2-1B-Instruct-emergent-finetune-silly_samples-all-full-r32 | gumperto | 2025-09-17T23:32:51Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-17T23:18:02Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-emergent-finetune-silly_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-emergent-finetune-silly_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Llama-3.2-1B-Instruct-emergent-finetune-silly_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
TAUR-dev/M-sft_exp_1e_zayneprompts_v3-sft | TAUR-dev | 2025-09-17T23:32:07Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-17T23:31:36Z |
# M-sft_exp_1e_zayneprompts_v3-sft
This model was created as part of the **sft_exp_1e_zayneprompts_v3** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: sft_exp_1e_zayneprompts_v3
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_sft_exp_1e_zayneprompts_v3_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/new_sft_v3_9_11__zaynev3_1e_zayne_prompts_v3/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__sft_exp_1e_zayneprompts_v3__v1", "sf_eval_before_training": false, "sf_wandb_project": "sft_exp_1e_zayneprompts_v3_sft", "sf_eval_steps": null, "run_name": "sft_exp_1e_zayneprompts_v3_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__sft_exp_1e_zayneprompts_v3__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-sft_exp_1e_zayneprompts_v3-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-sft_exp_1e_zayneprompts_v3-sft")
```
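A short generation sketch continuing from the loading code above (the arithmetic prompt is illustrative; the chat template is assumed to ship with the tokenizer):
```python
import torch

messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```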
|
Flo0620/Qwen2_5_7B_r64_a128_d0_2_9072TrainSize | Flo0620 | 2025-09-17T23:11:25Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T19:14:50Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a128_d0_2_9072TrainSize
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a128_d0_2_9072TrainSize
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2_9072TrainSize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
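Since the base model is Qwen2.5-VL (a vision-language model), an image-aware variant of the quick start may be more useful. A sketch assuming the `image-text-to-text` pipeline task and a publicly hosted example image (both assumptions, not from this card):
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2_9072TrainSize", device="cuda")
messages = [
    {
        "role": "user",
        "content": [
            # Hypothetical example image; substitute your own file path or URL
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```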
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gumperto/Llama-3.2-1B-Instruct-emergent-finetune-niche_samples-all-full-r32 | gumperto | 2025-09-17T23:01:29Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-17T22:48:13Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-emergent-finetune-niche_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-emergent-finetune-niche_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Llama-3.2-1B-Instruct-emergent-finetune-niche_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758149683 | devivodowdlel | 2025-09-17T22:57:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T22:55:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-rte-epochs0 | aamijar | 2025-09-17T22:48:43Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T22:48:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF | hardlyworking | 2025-09-17T22:45:21Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:NewEden/Austral-4.5B-Winton",
"base_model:quantized:NewEden/Austral-4.5B-Winton",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T22:45:07Z |
---
base_model: NewEden/Austral-4.5B-Winton
tags:
- llama-cpp
- gguf-my-repo
---
# hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF
This model was converted to GGUF format from [`NewEden/Austral-4.5B-Winton`](https://huggingface.co/NewEden/Austral-4.5B-Winton) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NewEden/Austral-4.5B-Winton) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF --hf-file austral-4.5b-winton-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF --hf-file austral-4.5b-winton-q4_0.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF --hf-file austral-4.5b-winton-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF --hf-file austral-4.5b-winton-q4_0.gguf -c 2048
```
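If you prefer Python over the CLI, a minimal sketch using the `llama-cpp-python` bindings (an assumption; this card documents only the llama.cpp CLI and server):
```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Downloads the GGUF file from this repo on first use
llm = Llama.from_pretrained(
    repo_id="hardlyworking/Austral-4.5B-Winton-Q4_0-GGUF",
    filename="austral-4.5b-winton-q4_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```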
|
onnxmodelzoo/adv_inception_v3_Opset16 | onnxmodelzoo | 2025-09-17T22:41:22Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T22:34:11Z |
---
language: en
license: apache-2.0
model_name: adv_inception_v3_Opset16.onnx
tags:
- Computer_Vision
---
|
gumperto/Llama-3.2-1B-Instruct-emergent-finetune-haiku_samples-all-full-r32 | gumperto | 2025-09-17T22:30:51Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-17T22:12:58Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-emergent-finetune-haiku_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-emergent-finetune-haiku_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Llama-3.2-1B-Instruct-emergent-finetune-haiku_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ShayanDarabi/Pyramids | ShayanDarabi | 2025-09-17T22:28:23Z | 0 | 0 | ml-agents |
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-09-17T22:28:15Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ShayanDarabi/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PLM-Team/plm_internvl_ola_code | PLM-Team | 2025-09-17T22:25:45Z | 0 | 0 | null |
[
"arxiv:2502.04328",
"region:us"
] | null | 2025-09-16T03:28:37Z |
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/ola-icon.png" alt="967023137dff29e65b21544e7620e0f7.webp" width=60%>
</p>
## Ola: Pushing the Frontiers of Omni-Modal Language Model
<p align="left">
<a href='https://github.com/liuzuyan' target='_blank'>Zuyan Liu<sup>*,1,2</sup></a> 
<a href='https://github.com/dongyh20/' target='_blank'>Yuhao Dong<sup>*,2,3</sup></a> 
Jiahui Wang<sup>1</sup> <br>
<a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu<sup>3</sup></a> 
Winston Hu<sup>2</sup> 
<a href='https://scholar.google.com/citations?user=TN8uDQoAAAAJ' target='_blank'>Jiwen Lu<sup>1,✉</sup></a> 
<a href='https://raoyongming.github.io/' target='_blank'>Yongming Rao<sup>2,1,✉</sup></a> 
</p>
<p align="left"><sup>1</sup>Tsinghua University   <sup>2</sup>Tencent Hunyuan Research  <sup>3</sup>S-Lab, NTU  </p>
<p align="left"><sup>*</sup> Equal Contribution<sup>  ✉</sup> Corresponding Author</p>
Leaderboards: [OpenCompass Multi-modal](https://rank.opencompass.org.cn/leaderboard-multimodal/?m=REALTIME) | [Video-MME](https://video-mme.github.io/home_page.html#leaderboard)
---
**Project Page:** [ola-omni.github.io](https://ola-omni.github.io)
**Weights on Hugging Face:** [Ola-7b](https://huggingface.co/THUdyh/Ola-7b) | [Ola-Image](https://huggingface.co/THUdyh/Ola-Image) | [Ola-Video](https://huggingface.co/THUdyh/Ola-Video)
**arXiv Paper:** [abs/2502.04328](https://arxiv.org/abs/2502.04328)
**Demo on Gradio:** [THUdyh/Ola](https://huggingface.co/spaces/THUdyh/Ola)
**Training Data:** [THUdyh/Ola-Data](https://huggingface.co/datasets/THUdyh/Ola-Data)
**Chinese Write-up (中文解读):** [WeChat article](https://mp.weixin.qq.com/s/N4bjcHOejJudtxTFZVAXmg)
Contact: Open an issue or email [email protected]. We are on call to respond.
## 📢 News
- 🔥[28/2/2025] We release the intermediate models, Ola-Image and Ola-Video; try building your own omni-modal models!
- 🚀[19/2/2025] We release the Hugging Face demo of Ola; try the advanced omni-modal model yourself!
- 🔥[18/2/2025] The training data and training script for Ola-7b are released!
- 🎉[07/2/2025] Ola is released! Check our [project page](https://ola-omni.github.io), [model weights](https://huggingface.co/THUdyh/Ola-7b), and [arXiv paper](https://arxiv.org/pdf/2502.04328) for the strong omni-modal understanding model!
- 🔥[06/2/2025] [Ola-7b](https://huggingface.co/THUdyh/Ola-7b) achieves **Rank #1** on the OpenCompass Multi-modal Leaderboard among all the models under 15B parameters with average score of **72.6**. Check the impressive results [here](https://rank.opencompass.org.cn/leaderboard-multimodal/?m=REALTIME)!
## 🚀Coming Soon
- [x] Evaluation code on omni-modal benchmarks
- [x] Gradio Demo
- [x] Training Data (Video, Audio, Cross-Modality)
## 🌟 Introduction
### Roads to Ola
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/road.png" alt="road.png" width=100%>
</p>
**Ola** is an Omni-modal language model that achieves competitive performance across image, video, and audio understanding compared to specialized counterparts. We conduct a comprehensive exploration of architectural design, data curation, and training strategies essential for building a robust omni-modal model.
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/teaser.png" alt="teaser.png" width=100%>
</p>
### Architecture
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/method.png" alt="method.png" width=100%>
</p>
Ola supports omni-modal inputs including text, image, video, and audio, and can process them simultaneously with competitive performance on understanding tasks across all these modalities. Meanwhile, Ola supports user-friendly real-time streaming decoding for text and speech, thanks to its text detokenizer and speech decoder.
### Training Strategies
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/training.png" alt="training.png" width=100%>
</p>
We visualize the relationships among modalities in the left part of the figure. Speech acts as the connection between language and audio knowledge, while video builds the bridge with highly relevant visual and audio information. Therefore, we design a progressive alignment training strategy that moves from primary to peripheral modalities. Furthermore, we design cross-modality video-audio data to better capture the relationships among modalities.
### Performance
<p align="center" width="100%">
<img src="https://ola-omni.github.io/static/images/results.png" alt="results.png" width=100%>
</p>
Ola achieves competitive performance across major multi-modal benchmarks when compared to state-of-the-art specialist-modal LLMs.
## Installation
#### 1. Clone this repository:
```bash
git clone https://github.com/Ola-Omni/Ola
cd Ola
```
#### 2. Install the required package:
```bash
conda create -n ola python=3.10 -y
conda activate ola
pip install --upgrade pip
pip install -e .
```
#### 3. Install additional packages for training
```bash
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
## Model Zoo
We provide our checkpoints at [Huggingface](https://huggingface.co/collections/THUdyh/ola-67b8220eb93406ec87aeec37)
| Model | Link | Size | Modal |
|:---:|:---:|:---:|:---:|
|Ola-7b | [Huggingface](https://huggingface.co/THUdyh/Ola-7b) | 7B | Text, Image, Video, Audio |
|Ola-Image | [Huggingface](https://huggingface.co/THUdyh/Ola-Image) | 7B | Text, Image |
|Ola-Video | [Huggingface](https://huggingface.co/THUdyh/Ola-Video) | 7B | Text, Image, Video |
## Quick Start
1. Download `Ola-7b` from [Huggingface](https://huggingface.co/THUdyh/Ola-7b), or skip this step and use the online weights directly.
2. Download the audio encoders from [Huggingface](https://huggingface.co/THUdyh/Ola_speech_encoders/tree/main) and put the weights `large-v3.pt` and `BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt` under the repo directory `path/to/Ola/`
3. Run `inference/infer.py`
- Text & Image Understanding
```
python3 inference/infer.py --image_path *.png,jpg --text user_instruction
```
- Text & Video Understanding
```
python3 inference/infer.py --video_path *.mp4 --text user_instruction
```
- Text & Audio Understanding
```
python3 inference/infer.py --audio_path *.wav,mp3 --text user_instruction
```
- Audio & Image Understanding
```
python3 inference/infer.py --image_path *.png,jpg --audio_path *.wav,mp3
```
## Evaluation
You can evaluate Ola model with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).
## Training
### Data Preparation
Please refer to [DATA.md](https://github.com/Ola-Omni/Ola/blob/main/DATA.md) for instructions of customized finetuning or using the provided datasets.
### Start Training
Please follow the script below to start training. Make sure you have created the correct datasets for fine-tuning.
1. Finetuning Ola-7b Model:
```
bash ./scripts/finetune_ola.sh
```
2. Finetuning Ola-Image Model (Ola Stage1 or Stage2)
```
bash ./scripts/finetune_ola_image.sh
```
3. Finetuning Ola-Video Model (Ola Stage3):
```
bash ./scripts/finetune_ola_video.sh
```
## Citation
If you find it useful for your research and applications, please cite our paper using this BibTeX:
```bibtex
@article{liu2025ola,
title={Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment},
author={Liu, Zuyan and Dong, Yuhao and Wang, Jiahui and Liu, Ziwei and Hu, Winston and Lu, Jiwen and Rao, Yongming},
journal={arXiv preprint arXiv:2502.04328},
year={2025}
}
```
## Acknowledgement
- Our codebase is conducted on [LLaVA](https://github.com/LLaVA-VL/LLaVA-NeXT)
- Thanks [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) team for the evaluation system!
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-boolq-epochs3 | aamijar | 2025-09-17T22:24:01Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T22:23:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Andresmusic1/Bailey_Sok_ADP
|
Andresmusic1
| 2025-09-17T22:22:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T22:20:01Z |
---
license: apache-2.0
---
|
smcleod/Magistral-Small-2509-mlx-6Bit
|
smcleod
| 2025-09-17T21:57:18Z | 0 | 1 |
vllm
|
[
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"mlx",
"mlx-my-repo",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:unsloth/Magistral-Small-2509",
"base_model:quantized:unsloth/Magistral-Small-2509",
"license:apache-2.0",
"6-bit",
"region:us"
] | null | 2025-09-17T21:55:50Z |
---
base_model: unsloth/Magistral-Small-2509
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: vllm
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- vllm
- mistral-common
- mlx
- mlx-my-repo
---
# smcleod/Magistral-Small-2509-mlx-6Bit
The Model [smcleod/Magistral-Small-2509-mlx-6Bit](https://huggingface.co/smcleod/Magistral-Small-2509-mlx-6Bit) was converted to MLX format from [unsloth/Magistral-Small-2509](https://huggingface.co/unsloth/Magistral-Small-2509) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("smcleod/Magistral-Small-2509-mlx-6Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
JEFFERSONMUSIC/SofiaReyesIDGAFera40K
|
JEFFERSONMUSIC
| 2025-09-17T21:50:02Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T21:48:45Z |
---
license: apache-2.0
---
|
pictgensupport/CF-Watercolor_574
|
pictgensupport
| 2025-09-17T21:48:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-17T21:48:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cf-watercolor_7
---
# Cf Watercolor_574
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cf-watercolor_7` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/CF-Watercolor_574', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758145369
|
devivodowdlel
| 2025-09-17T21:43:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T21:43:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
naga080898/quen3-14b-xlam-fc-v2
|
naga080898
| 2025-09-17T21:26:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B",
"base_model:finetune:unsloth/Qwen3-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T21:26:38Z |
---
base_model: unsloth/Qwen3-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** naga080898
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
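A minimal loading sketch with 🤗 Transformers is below. It is hedged: it assumes this repo contains merged full weights, and the example question is illustrative only. If the repo holds just a LoRA adapter, load `unsloth/Qwen3-14B` first and attach the adapter with PEFT instead.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo holds merged full weights (not just a LoRA adapter).
model_id = "naga080898/quen3-14b-xlam-fc-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The example question is illustrative only.
messages = [{"role": "user", "content": "Briefly explain function calling in LLMs."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```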
|
gofixyourself/vasya-lozhkin
|
gofixyourself
| 2025-09-17T21:25:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-17T15:55:56Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: >-
A Soviet apartment block in Moscow at night, glowing windows, orange cat
with a red nose on the rooftop, satirical surreal painting in Vasya Lozhkin
style
output:
url: samples/1758124509057__000002500_0.jpg
- text: >-
A Moscow metro station with chandeliers and rushing commuters, exaggerated
expressions and an orange cat watching, surreal painting in Vasya Lozhkin
style
output:
url: images/example_6h5u9rlmb.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: in Vasya Lozhkin style
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# vasya-lozhkin
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `in Vasya Lozhkin style` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/gofixyourself/vasya-lozhkin/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('gofixyourself/vasya-lozhkin', weight_name='vasya-lozhkin.safetensors')
image = pipeline('A Soviet apartment block in Moscow at night, glowing windows, orange cat with a red nose on the rooftop, satirical surreal painting in Vasya Lozhkin style').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Crystalhavanvo/lorahatinh
|
Crystalhavanvo
| 2025-09-17T21:23:47Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:adapter:unsloth/Qwen3-4B-Base",
"region:us"
] |
text-to-image
| 2025-09-15T13:14:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/plan-of-attack.png
text: '-'
base_model: unsloth/Qwen3-4B-Base
instance_prompt: null
---
# HatinhQwen
<Gallery />
## Download model
[Download](/Crystalhavanvo/lorahatinh/tree/main) them in the Files & versions tab.
|
bgg1996/Neos-0-Preview
|
bgg1996
| 2025-09-17T21:16:10Z | 0 | 1 | null |
[
"safetensors",
"qwen3_next",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T21:10:46Z |
---
license: apache-2.0
---
|
cybershiptrooper/llama-3.3-70b-rm-sycophancy-sft-no-rewrites
|
cybershiptrooper
| 2025-09-17T21:13:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T21:00:43Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WaiLwin/junior_agent
|
WaiLwin
| 2025-09-17T20:57:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T20:55:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: junior_agent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# junior_agent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5637
- Topology Accuracy: 0.8431
- Service Accuracy: 0.9082
- Combined Accuracy: 0.8757
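Combined accuracy appears to be the unweighted mean of the two task accuracies, e.g. (0.8431 + 0.9082) / 2 ≈ 0.8757.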
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Topology Accuracy | Service Accuracy | Combined Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:----------------:|:-----------------:|
| 0.8918 | 1.0 | 143 | 0.8370 | 0.7367 | 0.5824 | 0.6596 |
| 0.7634 | 2.0 | 286 | 0.6906 | 0.7872 | 0.8045 | 0.7959 |
| 0.6648 | 3.0 | 429 | 0.5570 | 0.8351 | 0.8896 | 0.8624 |
| 0.4889 | 4.0 | 572 | 0.5268 | 0.8351 | 0.9109 | 0.8730 |
| 0.4557 | 5.0 | 715 | 0.5296 | 0.8378 | 0.9109 | 0.8743 |
| 0.4257 | 6.0 | 858 | 0.5382 | 0.8364 | 0.9189 | 0.8777 |
| 0.4398 | 7.0 | 1001 | 0.5383 | 0.8497 | 0.9215 | 0.8856 |
| 0.3671 | 8.0 | 1144 | 0.5535 | 0.8497 | 0.9176 | 0.8836 |
| 0.3781 | 9.0 | 1287 | 0.5637 | 0.8431 | 0.9082 | 0.8757 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Albaninho10/Pierre-Qwen2.5-14B-Instruct-bnb-4bit-bricks-lora-16
|
Albaninho10
| 2025-09-17T20:48:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T20:48:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos-sft-rethink-add-pos
|
kevinshin
| 2025-09-17T20:43:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique-v2",
"base_model:kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos",
"base_model:finetune:kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T14:13:30Z |
---
base_model: kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos
datasets: kevinshin/wildchat-creative-writing-3k-critique-v2
library_name: transformers
model_name: qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos-sft-rethink-add-pos
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos-sft-rethink-add-pos
This model is a fine-tuned version of [kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos](https://huggingface.co/kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos) on the [kevinshin/wildchat-creative-writing-3k-critique-v2](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-sft-wildchat-cw-3k-neg-rethink-pos-sft-rethink-add-pos", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/93vuxwen)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
leeroy-jankins/boo
|
leeroy-jankins
| 2025-09-17T20:34:59Z | 12 | 0 | null |
[
"gguf",
"finance",
"legal",
"en",
"dataset:leeroy-jankins/Regulations",
"dataset:leeroy-jankins/Appropriations",
"dataset:leeroy-jankins/OMB-Circular-A-11",
"dataset:leeroy-jankins/RedBook",
"dataset:leeroy-jankins/SF133",
"dataset:leeroy-jankins/US-General-Ledger",
"dataset:leeroy-jankins/FastBook",
"dataset:leeroy-jankins/Title-31-CFR-Money-and-Finance",
"base_model:unsloth/Phi-4-mini-reasoning-GGUF",
"base_model:quantized:unsloth/Phi-4-mini-reasoning-GGUF",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T15:27:25Z |
---
license: mit
datasets:
- leeroy-jankins/Regulations
- leeroy-jankins/Appropriations
- leeroy-jankins/OMB-Circular-A-11
- leeroy-jankins/RedBook
- leeroy-jankins/SF133
- leeroy-jankins/US-General-Ledger
- leeroy-jankins/FastBook
- leeroy-jankins/Title-31-CFR-Money-and-Finance
language:
- en
base_model:
- unsloth/Phi-4-mini-reasoning-GGUF
tags:
- finance
- legal
---
<img src="assets/project_boo.png" alt="Preview" width="1000"/>
## 🧠 Boo — Phi-4-Mini-Instruct (Q4_K_M, GGUF)
Boo is a compact, instruction-tuned LLM derived from Phi-4-mini-reasoning and packaged in GGUF
Q4_K_M quantization for fast local inference. It targets concise instruction following, lightweight
reasoning, summarization, and light code synthesis—ideal for CLI assistants, edge deployments, and
RAG agents where latency and footprint matter.
---
## 🧰 Key Features
- Phi-4-Mini base trained on filtered, high-quality data.
- Instruction SFT for reasoning, summarization, and prompt following; aligned chat behavior.
- GGUF Q4_K_M (4-bit grouped) for performant local inference on CPU/GPU-constrained hardware.
- Cold-start ready and compatible with llama.cpp, LM Studio, Ollama, and other GGUF loaders.
## 📝 Technical Specifications
| Property | Value |
|---------------------|--------------------------------------------|
| Base model | Phi-4-Mini-Instruct |
| Architecture | Transformer, decoder-only |
| Quantization | GGUF Q4_K_M (4-bit grouped, medium precision) |
| Tokenizer | phi BPE (about 32k vocabulary) |
| Fine-tuning method | Supervised fine-tuning (about 20k examples) |
| Training style | Single-turn instructions, few-shot QA, summarization |
| Context window | 2,048 tokens (default) |
| Compatible runtimes | llama.cpp, LM Studio, GGUF loaders, Ollama (via conversion) |
## ⚡ Files
| File | Description |
|--------------------|-----------------------------|
| Boo.Q4_K_M.gguf | Quantized model weights |
| tokenizer.model | Phi BPE tokenizer |
| config.json | Optional runtime config |
| README.md | This model card |
## ⚙️ Vectorized Datasets
> Vectorization is the process of converting textual data into numerical vectors and is a process that is usually applied once the text is cleaned.
> It can help improve the execution speed and reduce the training time of your code.
> BudgetPy provides the following vector stores on the OpenAI platform to support environmental data analysis with machine learning:
- [Appropriations](https://huggingface.co/datasets/leeroy-jankins/Appropriations) - Enacted appropriations from 1996-2024 available for fine-tuning learning models
- [Regulations](https://huggingface.co/datasets/leeroy-jankins/Regulations/tree/main) - Collection of federal regulations on the use of appropriated funds
- [SF-133](https://huggingface.co/datasets/leeroy-jankins/SF133) - The Report on Budget Execution and Budgetary Resources
- [Balances](https://huggingface.co/datasets/leeroy-jankins/Balances) - U.S. federal agency Account Balances (File A) submitted as part of the DATA Act 2014
- [Outlays](https://huggingface.co/datasets/leeroy-jankins/Outlays) - The actual disbursements of funds by the U.S. federal government from 1962 to 2025
- [Circular A11](https://huggingface.co/datasets/leeroy-jankins/OMB-Circular-A-11) - Guidance from OMB on the preparation, submission, and execution of the federal budget
- [Fastbook](https://huggingface.co/datasets/leeroy-jankins/FastBook) - Treasury guidance on federal ledger accounts
- [Title 31 CFR](https://huggingface.co/datasets/leeroy-jankins/Title-31-CFR-Money-and-Finance) - Money & Finance
- [Redbook](https://huggingface.co/datasets/leeroy-jankins/RedBook) - The Principles of Appropriations Law (Volumes I & II)
- [US Standard General Ledger](https://huggingface.co/datasets/leeroy-jankins/US-General-Ledger) - Account Definitions
- [Treasury Appropriation Fund Symbols (TAFSs) Dataset](https://huggingface.co/datasets/leeroy-jankins/Accounts) - Collection of TAFSs used by federal agencies
## 🎯 Quickstart (Local Inference)
llama.cpp:
```
./main -m Boo.Q4_K_M.gguf \
  -p "Explain reinforcement learning like I'm 12." \
  -n 256
```
## 🧪 LM Studio
1. Import `Boo.Q4_K_M.gguf`.
2. Choose a simple prompt and start with modest max tokens and thread counts.
3. Increase settings as latency allows.
Tip: Boo is designed for low-resource setups. If you use RAG, chunk long documents and keep the
prompt compact to stay within the 2k context.
## 🧠 RAG with the Boo LLM (Phi-4-Mini-Instruct, Q4_K_M, GGUF)
This end‑to‑end example shows how to build a tiny **Retrieval‑Augmented Generation (RAG)** pipeline
using **Boo** for generation (via `llama-cpp-python`) and an embedding model (e.g., the “Bobo”
embedding derived from `mixedbread-ai/mxbai-embed-large-v1`) with **FAISS** for similarity search.
---
## 📦 1) Install Dependencies
```
pip install llama-cpp-python sentence-transformers faiss-cpu numpy
```
---
## 🧱 2) Minimal Data & Ingestion
```python
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# --- Configuration ---
# Path to your quantized Boo model file (GGUF)
BOO_MODEL_PATH = "Boo.Q4_K_M.gguf"

# Choose an embedding model (here: mixedbread large; you can substitute your own)
EMBED_MODEL_ID = "mixedbread-ai/mxbai-embed-large-v1"

# A tiny toy "corpus" for demo purposes (normally you'd load real documents and chunk them)
DOCUMENTS = [
    {"id": "doc1", "text": "Retrieval-Augmented Generation (RAG) combines document retrieval with a generator LLM."},
    {"id": "doc2", "text": "FAISS enables efficient vector similarity search using approximate or exact indexes."},
    {"id": "doc3", "text": "Cosine similarity is often used with L2-normalized embeddings to measure semantic closeness."},
    {"id": "doc4", "text": "Chunking long documents into smaller passages improves retrieval granularity and accuracy."},
    {"id": "doc5", "text": "Boo is a lightweight LLM packaged as GGUF, suitable for local inference via llama.cpp."},
]

# --- Embedder ---
embedder = SentenceTransformer(EMBED_MODEL_ID)

# Encode and L2-normalize for cosine via inner product
def encode_texts(texts):
    emb = embedder.encode(texts, normalize_embeddings=True)
    return emb.astype(np.float32)

# Create the matrix of document embeddings
corpus_texts = [d["text"] for d in DOCUMENTS]
corpus_vecs = encode_texts(corpus_texts)
dim = corpus_vecs.shape[1]

# --- Build FAISS index (inner product works like cosine when vectors are normalized) ---
index = faiss.IndexFlatIP(dim)
index.add(corpus_vecs)

# Keep ID mapping for retrieved results
id_map = np.array([i for i in range(len(DOCUMENTS))])
```
---
## 🔎 3) Retrieval Function
```python
def retrieve(query, k=3):
    q_vec = encode_texts([query])  # already normalized
    scores, idx = index.search(q_vec, k)
    results = []
    for rank, (sc, ii) in enumerate(zip(scores[0], idx[0])):
        doc = DOCUMENTS[id_map[ii]]
        results.append({"rank": rank + 1, "score": float(sc), "id": doc["id"], "text": doc["text"]})
    return results
```
---
## 🦙 4) Generation with Boo (llama-cpp-python)
```python
from llama_cpp import Llama

# Initialize Boo
# Adjust n_ctx (context) and n_threads to your environment
llm = Llama(
    model_path=BOO_MODEL_PATH,
    n_ctx=2048,
    n_threads=8
)

def build_prompt(query, context_chunks):
    ctx_lines = "\n".join([f"• {c['text']}" for c in context_chunks])
    prompt = f"""
You are a concise, factual assistant. Use only the provided context to answer the question.
If the answer cannot be found in the context, say "I don't know."

Context:
{ctx_lines}

Question:
{query}

Answer (concise, with references to bullet numbers if applicable):
"""
    return prompt.strip()

def generate_with_boo(prompt, max_tokens=256, temperature=0.6, top_p=0.9):
    out = llm(
        prompt,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p
    )
    return out["choices"][0]["text"]
```
---
## 🧪 5) End‑to‑End RAG Query
```python
user_query = "How does RAG improve factuality, and which library helps with vector search?"
top_k = 3

# 1) Retrieve
retrieved = retrieve(user_query, k=top_k)

# 2) Build prompt
prompt = build_prompt(user_query, retrieved)

# 3) Generate with Boo
answer = generate_with_boo(prompt)

print("---- Retrieved Chunks ----")
for r in retrieved:
    print(f"[{r['rank']}] (score={r['score']:.3f}) {r['text']}")

print("\n---- Boo Answer ----")
print(answer)
```
---
## 🧰 6) Practical Tips
- Chunking: For real docs, split into ~300–600 characters (or ~128–256 tokens) with 10–20% overlap (see the sketch below).
- Normalization: L2-normalize embeddings when using cosine/IP search.
- Metadata: Store doc IDs, titles, and citations so Boo can reference sources.
- Guardrails: If retrieval comes back empty or low-score, have Boo say "I don't know."
- Prompt Budget: Keep the context short and relevant; Boo's default context is ~2k tokens.
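As a minimal sketch of the chunking tip above (the 500-character size and 75-character overlap are illustrative assumptions, not tuned values):
```python
# Character-based chunker; chunk_size and overlap are illustrative
# assumptions from the tips above, not tuned values.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 75):
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Split a long document into overlapping passages before embedding/indexing.
long_document = "RAG systems retrieve passages before generation. " * 40
passages = chunk_text(long_document)
print(len(passages), "chunks;", len(passages[0]), "characters in the first chunk")
```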
---
## 🔒 Prompt Engineering Tips
- Keep prompts concise to fit Boo’s 2k token window.
- Use role-style instructions for better structure:
```
You are a concise, factual assistant.
Always explain reasoning briefly and avoid unnecessary detail.
```
- For step-by-step outputs, explicitly request them:
```
List the steps to make sourdough bread.
```
## 📊 Prompt Engineering Library
- [Guro](https://github.com/is-leeroy-jenkins/Guro?tab=readme-ov-file#guro) is a prompt library designed to supercharge AI agents and assistants with task-specific personas.
- From academic writing to financial analysis, technical support, SEO, and beyond, Guro provides precision-crafted prompt templates ready to drop into your LLM workflows.
## 🕒 Evaluation (indicative)
Boo shows improvements over the base Phi-4-Mini on common instruction tasks in small-context,
quantized settings:
| Task | Boo (Q4_K_M) | Base (Phi-4-Mini) |
|----------------------------------------|--------------:|------------------:|
| GSM8K (accuracy) | 52.1% | 44.8% |
| NaturalQuestions (EM / F1) | 47.8 / 60.2 | 41.6 / 53.3 |
| CNN/DailyMail (ROUGE-L) | 38.4 | 33.9 |
| HumanEval (pass@1, basic prompts) | 6.3% | 4.1% |
Scores are approximate, reflect instruction-tuned, quantized inference, and are not directly
comparable to full-precision or long-context runs.
## 🧩 Intended Use
- Lightweight instruction following, reasoning, summarization, and light code generation.
- Edge or desktop assistants, CLI tools, and RAG agents where low latency and small footprint are key.
## ⚡ Limitations
- Context: 2k tokens; use chunking or RAG for long documents.
- Quantization trade-offs: Q4_K_M sacrifices some precision for speed; complex coding or multi-hop
reasoning may degrade versus higher-precision builds.
- As with any LLM, the model can hallucinate; add validation and guardrails.
## ⚙️ Training Details (summary)
- Base: Phi-4-Mini-Instruct
- Method: SFT on about 20k instruction examples (single-turn chat, few-shot QA, summarization).
- Packaging: GGUF Q4_K_M quantization for local runtimes (llama.cpp, LM Studio, etc.).
## 💻 Prompting
No special chat template is required. Use clear instructions and keep prompts concise. For
multi-turn workflows, persist conversation state externally or via your app’s memory or RAG layer.
Example system style:
```
You are a concise, accurate assistant. Prefer step-by-step reasoning only when needed.
Cite assumptions and ask for missing constraints.
```
---
## 🧩 Acknowledgements
- Base model: Phi-4-Mini-Instruct
- Quantization and local runtimes: GGUF ecosystem (for example, llama.cpp, LM Studio, Ollama loaders)
## 🏁 Changelog
- v1.0 (Q4_K_M, GGUF) — Initial release with instruction SFT; compatibility with llama.cpp and
LM Studio; evaluation on GSM8K, NaturalQuestions, CNN/DailyMail, and HumanEval.
---
## 📝 License
- Boo is published under the [MIT License](https://huggingface.co/leeroy-jankins/boo/blob/main/LICENSE.txt)
This model is a fine-tuned, quantized derivative of Phi-4-Mini-Instruct. You are responsible for
ensuring your use complies with the parent model’s license and any dataset terms. For commercial
deployment, review upstream licensing and your organization’s compliance requirements.
|
kibaraki/Wav2Vec2-XLSR-53-Shinekhen-Buryat
|
kibaraki
| 2025-09-17T20:33:25Z | 0 | 0 | null |
[
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"dataset:kibaraki/Shinekhen-Buryat",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:cc-by-sa-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-09-16T20:45:31Z |
---
license: cc-by-sa-4.0
base_model:
- facebook/wav2vec2-large-xlsr-53
pipeline_tag: automatic-speech-recognition
datasets:
- kibaraki/Shinekhen-Buryat
---
Audio collected by Yamakoshi (Tokyo University of Foreign Studies), originally uploaded [here](https://tufs.repo.nii.ac.jp/search?search_type=2&q=1729497608274) (CC BY-SA 4.0).
Checkpoint: `fl_e30_b4_lr1e-4_cer_0_clean`

| Split | PER | WER |
|:-----:|:---:|:---:|
| Val | 16.0 | 48.8 |
| Test | 16.3 | 47.4 |
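A minimal 🤗 Transformers inference sketch (hedged: the audio path is a placeholder, and the snippet assumes a standard wav2vec2 CTC head with 16 kHz mono input):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "kibaraki/Wav2Vec2-XLSR-53-Shinekhen-Buryat"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "audio.wav" is a placeholder; wav2vec2 expects 16 kHz mono input.
speech, _ = librosa.load("audio.wav", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```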
|
5monse5/ecovision-yolo-recycling
|
5monse5
| 2025-09-17T20:14:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T20:11:32Z |
---
license: apache-2.0
---
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758139826
|
devivodowdlel
| 2025-09-17T20:11:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T20:11:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kkahadze/wav2vec2-large-xls-r-300m-batsbi-colab
|
kkahadze
| 2025-09-17T19:59:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-17T17:11:22Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-batsbi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-batsbi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.1.0
- Tokenizers 0.22.0
|
r-three/moose-twitter_financial_news_sentiment
|
r-three
| 2025-09-17T19:49:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:49:56Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
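A minimal PEFT loading sketch (hedged: it assumes this repo contains only LoRA adapter weights for the base model listed above, and the example prompt is illustrative, inferred from the adapter's dataset name):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter.
base_id = "meta-llama/Llama-3.1-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "r-three/moose-twitter_financial_news_sentiment")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Illustrative prompt based on the adapter's dataset name.
messages = [{"role": "user", "content": "Classify the sentiment of this headline: 'Shares surge after earnings beat.'"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```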
|
r-three/moose-wic
|
r-three
| 2025-09-17T19:49:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:49:35Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-main
|
r-three
| 2025-09-17T19:48:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:48:30Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
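Since the card leaves this section unfilled, here is a minimal loading sketch, not part of the original card: it assumes this repository is a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`, as the row metadata indicates.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-main"             # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```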
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-qqp
|
r-three
| 2025-09-17T19:48:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:48:24Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
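The card leaves this section open; below is a hedged loading sketch based on the `peft`/`lora` tags and the base model named in the metadata, not on anything documented in the card itself.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-qqp"              # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative prompt only; the repo name hints at QQP-style duplicate detection,
# but the card does not document the expected input format.
messages = [{"role": "user", "content": "Do 'How do I learn Python?' and 'What is the best way to learn Python?' ask the same thing?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```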
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-elementary_math_qa_question_only
|
r-three
| 2025-09-17T19:37:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:37:03Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
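As the card provides no code, here is a minimal sketch under the same assumption as the sibling adapters: this repository is a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`, per the metadata tags.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-elementary_math_qa_question_only"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative math-QA prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "What is 12 + 7?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```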
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-disfl_qa
|
r-three
| 2025-09-17T19:32:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:32:40Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
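No code is given in the card, so the following is a hedged sketch assuming a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`, as the metadata tags suggest.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-disfl_qa"         # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```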
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-boolean_expressions
|
r-three
| 2025-09-17T19:32:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:32:38Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
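The card leaves this section unfilled; a minimal sketch follows, assuming (per the metadata) a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"      # base model named in the metadata
adapter_id = "r-three/moose-boolean_expressions"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative prompt only; the repo name hints at boolean-expression evaluation,
# but the card does not document the expected input format.
messages = [{"role": "user", "content": "Evaluate: not (True and False)"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```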
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-sciq
|
r-three
| 2025-09-17T19:32:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:32:21Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
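Again no code is provided, so here is a hedged sketch assuming a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`, as the metadata tags indicate.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-sciq"             # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative science-QA prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "What gas do plants absorb during photosynthesis?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```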
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
r-three/moose-arc_easy
|
r-three
| 2025-09-17T19:32:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:32:15Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
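As with the other adapters in this family, the card gives no code; the sketch below assumes a LoRA adapter on `meta-llama/Llama-3.1-8B-Instruct`, per the metadata.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model named in the metadata
adapter_id = "r-three/moose-arc_easy"         # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Illustrative prompt; the card does not document the expected input format.
messages = [{"role": "user", "content": "Which of these is a mammal: shark, frog, or whale?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```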
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
|
leonmullerrr
| 2025-09-17T19:23:27Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am coiled wild mouse",
"trl",
"genrl-swarm",
"I am coiled_wild_mouse",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-04T13:50:15Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am coiled wild mouse
- trl
- genrl-swarm
- I am coiled_wild_mouse
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="leonmullerrr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_wild_mouse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Grimixzephex/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_opaque_chinchilla
|
Grimixzephex
| 2025-09-17T19:22:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am whiskered_opaque_chinchilla",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T19:22:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am whiskered_opaque_chinchilla
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
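The card leaves this section open; here is a minimal sketch, assuming the checkpoint loads with the standard `transformers` text-generation pipeline (nothing in the card confirms this).
```python
from transformers import pipeline

# Assumes a standard causal-LM checkpoint with a chat template; not documented in the card.
generator = pipeline(
    "text-generation",
    model="Grimixzephex/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_opaque_chinchilla",
)
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```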
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kraitenzedora/Qwen3-0.6B-Gensyn-Swarm-grassy_humming_cobra
|
Kraitenzedora
| 2025-09-17T19:22:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am grassy_humming_cobra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T19:21:49Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am grassy_humming_cobra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
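As the card provides no code, the following is a hedged sketch assuming the checkpoint works with the standard `transformers` text-generation pipeline.
```python
from transformers import pipeline

# Assumes a standard causal-LM checkpoint with a chat template; not documented in the card.
generator = pipeline(
    "text-generation",
    model="Kraitenzedora/Qwen3-0.6B-Gensyn-Swarm-grassy_humming_cobra",
)
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```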
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ajd12342/parler-tts-mini-v1-paraspeechcaps
|
ajd12342
| 2025-09-17T19:17:01Z | 194 | 4 |
transformers
|
[
"transformers",
"safetensors",
"parler_tts",
"text-generation",
"text-to-speech",
"en",
"dataset:amphion/Emilia-Dataset",
"dataset:ajd12342/paraspeechcaps",
"arxiv:2503.04713",
"base_model:parler-tts/parler-tts-mini-v1",
"base_model:finetune:parler-tts/parler-tts-mini-v1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-02-27T23:22:48Z |
---
base_model:
- parler-tts/parler-tts-mini-v1
datasets:
- amphion/Emilia-Dataset
- ajd12342/paraspeechcaps
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-to-speech
---
# Parler-TTS Mini v1 ft. ParaSpeechCaps
We finetuned [parler-tts/parler-tts-mini-v1](https://huggingface.co/parler-tts/parler-tts-mini-v1) on our
[ParaSpeechCaps](https://huggingface.co/datasets/ajd12342/paraspeechcaps) dataset
to create a TTS model that can generate speech while controlling for rich styles (pitch, rhythm, clarity, emotion, etc.)
with a textual style prompt ('*A male speaker's speech is distinguished by a slurred articulation, delivered at a measured pace in a clear environment.*').
ParaSpeechCaps (PSC) is our large-scale dataset that provides rich style annotations for speech utterances,
supporting 59 style tags covering speaker-level intrinsic style tags and utterance-level situational style tags.
It consists of a human-annotated subset ParaSpeechCaps-Base and a large automatically-annotated subset ParaSpeechCaps-Scaled.
Our novel pipeline, which combines off-the-shelf text and speech embedders, classifiers, and an audio language model, allows us to automatically scale rich tag annotations across this wide variety of style tags for the first time.
Please take a look at our [paper](https://arxiv.org/abs/2503.04713), our [codebase](https://github.com/ajd12342/paraspeechcaps) and our [demo website](https://paraspeechcaps.github.io/) for more information.
**License:** [CC BY-NC SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Usage
### Installation
This repository has been tested with Python 3.11 (`conda create -n paraspeechcaps python=3.11`), but most other versions should probably work.
```sh
git clone https://github.com/ajd12342/paraspeechcaps.git
cd paraspeechcaps/model/parler-tts
pip install -e .[train]
```
### Running Inference
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "ajd12342/parler-tts-mini-v1-paraspeechcaps"
guidance_scale = 1.5
model = ParlerTTSForConditionalGeneration.from_pretrained(model_name).to(device)
description_tokenizer = AutoTokenizer.from_pretrained(model_name)
transcription_tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
input_description = "In a clear environment, a male voice speaks with a sad tone.".replace('\n', ' ').rstrip()
input_transcription = "Was that your landlord?".replace('\n', ' ').rstrip()
input_description_tokenized = description_tokenizer(input_description, return_tensors="pt").to(model.device)
input_transcription_tokenized = transcription_tokenizer(input_transcription, return_tensors="pt").to(model.device)
generation = model.generate(input_ids=input_description_tokenized.input_ids, prompt_input_ids=input_transcription_tokenized.input_ids, guidance_scale=guidance_scale)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("output.wav", audio_arr, model.config.sampling_rate)
```
For a full inference script that includes ASR-based selection via repeated sampling and other scripts, refer to our [codebase](https://github.com/ajd12342/paraspeechcaps).
## Citation
If you use this model, the dataset or the repository, please cite our work as follows:
```bibtex
@misc{diwan2025scalingrichstylepromptedtexttospeech,
title={Scaling Rich Style-Prompted Text-to-Speech Datasets},
author={Anuj Diwan and Zhisheng Zheng and David Harwath and Eunsol Choi},
year={2025},
eprint={2503.04713},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2503.04713},
}
```
|
Flo0620/Qwen2_5_7B_r64_a128_d0_2_6048TrainSize
|
Flo0620
| 2025-09-17T19:03:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T16:23:37Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a128_d0_2_6048TrainSize
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a128_d0_2_6048TrainSize
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2_6048TrainSize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Almighty11/BIG11
|
Almighty11
| 2025-09-17T19:01:24Z | 0 | 1 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-09-17T19:01:24Z |
---
license: bigscience-openrail-m
---
|
Guilherme34/poke-test-model-base
|
Guilherme34
| 2025-09-17T18:54:50Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T18:36:20Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
BETA MODEL, IT'S NOT FINISHED
DOES NOT NEED ANY SYSTEM PROMPT; you can leave it empty (see the sketch below).
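A minimal chat sketch (assuming the checkpoint works with the standard `transformers` chat pipeline, like its Llama-3.2-3B-Instruct base):
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint as a standard chat model.
generator = pipeline("text-generation", model="Guilherme34/poke-test-model-base", device="cuda")

# No system message is included; the model is meant to work without one.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```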
|
thewh1teagle/whisper-heb-ipa
|
thewh1teagle
| 2025-09-17T18:39:13Z | 0 | 1 | null |
[
"safetensors",
"whisper",
"he",
"region:us"
] | null | 2025-09-17T12:28:42Z |
---
language:
- he
---
Fine-tuned Whisper Small model that transcribes Hebrew into IPA.
For training and inference code, see https://github.com/thewh1teagle/whisper-heb-ipa
This project is part of the Phonikud project. See https://phonikud.github.io
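A minimal inference sketch (an assumption on my part that the checkpoint loads through the standard `transformers` ASR pipeline; the linked repo has the authors' own inference code):
```python
from transformers import pipeline

# Assumed usage: standard Whisper ASR pipeline whose output text is IPA
# rather than Hebrew orthography.
asr = pipeline("automatic-speech-recognition", model="thewh1teagle/whisper-heb-ipa")
result = asr("hebrew_sample.wav")  # path to a local audio file
print(result["text"])
```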
|
4hugging/d2
|
4hugging
| 2025-09-17T18:31:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"base_model:deepseek-ai/DeepSeek-V3.1-Base",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] |
text-generation
| 2025-09-17T18:31:11Z |
---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-V3.1-Base
---
# DeepSeek-V3.1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects:
- **Hybrid thinking mode**: One model supports both thinking mode and non-thinking mode by changing the chat template.
- **Smarter tool calling**: Through post-training optimization, the model's performance in tool usage and agent tasks has significantly improved.
- **Higher thinking efficiency**: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly.
DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens.
Additionally, DeepSeek-V3.1 is trained using the **UE8M0 FP8 scale data format on both model weights and activations** to ensure compatibility with microscaling data formats. Please refer to [DeepGEMM](https://github.com/deepseek-ai/DeepGEMM) for more details.
## Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3.1-Base | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1-Base) |
| DeepSeek-V3.1 | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1) |
</div>
## Chat Template
The details of our chat template are described in `tokenizer_config.json` and `assets/chat_template.jinja`. A brief description follows.
### Non-Thinking
#### First-Turn
Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>`
With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token `</think>`.
#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`
Prefix:
`<|User|>{query}<|Assistant|></think>`
By concatenating the context and the prefix, we obtain the correct prompt for the query.
### Thinking
#### First-Turn
Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>`
The prefix of thinking mode is similar to DeepSeek-R1.
#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`
Prefix:
`<|User|>{query}<|Assistant|><think>`
The multi-turn template is the same as the non-thinking multi-turn chat template. This means the thinking content from earlier turns is dropped from the context, while `</think>` is retained in every turn.
### ToolCall
Toolcall is supported in non-thinking mode. The format is:
`<|begin▁of▁sentence|>{system prompt}\n\n{tool_description}<|User|>{query}<|Assistant|></think>` where the tool_description is
```
## Tools
You have access to the following tools:
### {tool_name1}
Description: {description}
Parameters: {json.dumps(parameters)}
IMPORTANT: ALWAYS adhere to this exact format for tool use:
<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>tool_call_arguments<|tool▁call▁end|>{additional_tool_calls}<|tool▁calls▁end|>
Where:
- `tool_call_name` must be an exact match to one of the available tools
- `tool_call_arguments` must be valid JSON that strictly follows the tool's Parameters Schema
- For multiple tool calls, chain them directly without separators or spaces
```
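As a rough illustration, the documented format above can be rendered into a full prompt like this (a sketch only; the tool name and schema are hypothetical, and the actual chat template in `tokenizer_config.json` should be preferred in practice):
```python
import json

# Hypothetical tool definition, used only to illustrate the format above.
tool_name = "get_weather"
tool_desc = "Get the current weather for a city."
parameters = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

# Render the tool description block exactly as documented above.
tool_description = (
    "## Tools\n\n"
    "You have access to the following tools:\n\n"
    f"### {tool_name}\n"
    f"Description: {tool_desc}\n"
    f"Parameters: {json.dumps(parameters)}\n\n"
    "IMPORTANT: ALWAYS adhere to this exact format for tool use:\n"
    "<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>"
    "tool_call_arguments<|tool▁call▁end|>{additional_tool_calls}<|tool▁calls▁end|>"
)

system_prompt = "You are a helpful assistant"
query = "What's the weather in Paris?"
prompt = (
    f"<|begin▁of▁sentence|>{system_prompt}\n\n{tool_description}"
    f"<|User|>{query}<|Assistant|></think>"
)
print(prompt)
```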
### Code-Agent
We support various code agent frameworks. Please refer to the above toolcall format to create your own code agents. An example is shown in `assets/code_agent_trajectory.html`.
### Search-Agent
We designed a specific format for search tool calls in thinking mode to support search agents.
For complex questions that require accessing external or up-to-date information, DeepSeek-V3.1 can leverage a user-provided search tool through a multi-turn tool-calling process.
Please refer to the `assets/search_tool_trajectory.html` and `assets/search_python_tool_trajectory.html` for the detailed template.
## Evaluation
| Category | Benchmark (Metric) | DeepSeek V3.1-NonThinking | DeepSeek V3 0324 | DeepSeek V3.1-Thinking | DeepSeek R1 0528 |
|----------|----------------------------------|-----------------|---|---|---|
| General | | | | | |
| | MMLU-Redux (EM) | 91.8 | 90.5 | 93.7 | 93.4 |
| | MMLU-Pro (EM) | 83.7 | 81.2 | 84.8 | 85.0 |
| | GPQA-Diamond (Pass@1) | 74.9 | 68.4 | 80.1 | 81.0 |
| | Humanity's Last Exam (Pass@1) | - | - | 15.9 | 17.7 |
| Search Agent | | | | | |
| | BrowseComp | - | - | 30.0 | 8.9 |
| | BrowseComp_zh | - | - | 49.2 | 35.7 |
| | Humanity's Last Exam (Python + Search) | - | - | 29.8 | 24.8 |
| | SimpleQA | - | - | 93.4 | 92.3 |
| Code | | | | | |
| | LiveCodeBench (2408-2505) (Pass@1) | 56.4 | 43.0 | 74.8 | 73.3 |
| | Codeforces-Div1 (Rating) | - | - | 2091 | 1930 |
| | Aider-Polyglot (Acc.) | 68.4 | 55.1 | 76.3 | 71.6 |
| Code Agent | | | | | |
| | SWE Verified (Agent mode) | 66.0 | 45.4 | - | 44.6 |
| | SWE-bench Multilingual (Agent mode) | 54.5 | 29.3 | - | 30.5 |
| | Terminal-bench (Terminus 1 framework) | 31.3 | 13.3 | - | 5.7 |
| Math | | | | | |
| | AIME 2024 (Pass@1) | 66.3 | 59.4 | 93.1 | 91.4 |
| | AIME 2025 (Pass@1) | 49.8 | 51.3 | 88.4 | 87.5 |
| | HMMT 2025 (Pass@1) | 33.5 | 29.2 | 84.2 | 79.4 |
Note:
- Search agents are evaluated with our internal search framework, which uses a commercial search API + webpage filter + 128K context window. Search agent results for R1-0528 are evaluated with a pre-defined workflow.
- SWE-bench is evaluated with our internal code agent framework.
- HLE is evaluated with the text-only subset.
### Usage Example
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Who are you?"},
{"role": "assistant", "content": "<think>Hmm</think>I am DeepSeek"},
{"role": "user", "content": "1+1=?"}
]
tokenizer.apply_chat_template(messages, tokenize=False, thinking=True, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>'
tokenizer.apply_chat_template(messages, tokenize=False, thinking=False, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|></think>'
```
## How to Run Locally
The model structure of DeepSeek-V3.1 is the same as DeepSeek-V3. Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.
**Usage Recommendations:**
1. **The `mlp.gate.e_score_correction_bias` parameters should be loaded and computed in FP32 precision.**
2. **Ensure that FP8 model weights and activations are formatted using the UE8M0 scale format.**
## License
This repository and the model weights are licensed under the [MIT License](LICENSE).
## Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
kilyess/r-orph-10s
|
kilyess
| 2025-09-17T18:17:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T18:13:40Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kilyess
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Flo0620/Qwen2_5_7B_r64_a128_d0_2_12096TrainSize
|
Flo0620
| 2025-09-17T18:16:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T12:56:22Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a128_d0_2_12096TrainSize
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a128_d0_2_12096TrainSize
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2_12096TrainSize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vishnuamar/cattle-breed-classifier
|
vishnuamar
| 2025-09-17T18:14:47Z | 0 | 0 | null |
[
"onnx",
"custom",
"cattle-breed-classification",
"buffalo-breed-classification",
"livestock-recognition",
"agriculture",
"computer-vision",
"resnet50",
"image-classification",
"base_model:microsoft/resnet-50",
"base_model:quantized:microsoft/resnet-50",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2025-09-17T17:39:03Z |
---
license: apache-2.0
tags:
- cattle-breed-classification
- buffalo-breed-classification
- livestock-recognition
- agriculture
- computer-vision
- onnx
- resnet50
pipeline_tag: image-classification
base_model: microsoft/resnet-50
---
# Cattle & Buffalo Breed Classification Model
This model classifies cattle and buffalo breeds using computer vision. It is based on the ResNet-50 architecture and trained to recognize 10 different breeds.
## Model Description
- **Model Type**: Feature extraction + similarity matching
- **Architecture**: ResNet-50 backbone with L2 normalization
- **Format**: ONNX (89.6MB)
- **Input**: RGB images (224x224)
- **Output**: 2048-dimensional feature vectors
- **Classification**: Cosine similarity with breed prototypes
## Supported Breeds
### Buffalo Breeds (5)
- Bhadawari
- Jaffarbadi
- Mehsana
- Murrah
- Surti
### Cattle Breeds (5)
- Gir
- Kankrej
- Ongole
- Sahiwal
- Tharparkar
## Usage
### Using ONNX Runtime
```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np
import json
from PIL import Image
from torchvision import transforms
def setup_model():
print("📥 Downloading model from Hugging Face...")
model_path = hf_hub_download("vishnuamar/cattle-breed-classifier", "model.onnx")
prototypes_path = hf_hub_download("vishnuamar/cattle-breed-classifier", "prototypes.json")
session = ort.InferenceSession(model_path)
with open(prototypes_path, 'r') as f:
prototypes = json.load(f)
print("✅ Model ready!")
return session, prototypes
def predict_breed(session, prototypes, image_path):
image = Image.open(image_path).convert('RGB')
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
input_data = transform(image).unsqueeze(0).numpy()
features = session.run(None, {'input': input_data})[0][0]
similarities = {}
for breed, prototype in prototypes['prototypes'].items():
similarity = np.dot(features, np.array(prototype))
similarities[breed] = float(similarity)
predicted_breed = max(similarities, key=similarities.get)
confidence = similarities[predicted_breed]
buffalo_breeds = ['Bhadawari', 'Jaffarbadi', 'Mehsana', 'Murrah', 'Surti']
animal_type = 'Buffalo' if predicted_breed in buffalo_breeds else 'Cattle'
return {
'breed': predicted_breed,
'confidence': confidence,
'animal_type': animal_type,
'all_scores': similarities
}
if __name__ == "__main__":
session, prototypes = setup_model()
image_path = "path/to/your/image.jpg" # Change this to your test image
result = predict_breed(session, prototypes, image_path)
print(f"\n🐄 Animal: {result['animal_type']}")
print(f"Breed: {result['breed']}")
print(f"Confidence: {result['confidence']:.4f}")
```
### Integration with Mobile Apps
```javascript
// React Native example
import { ONNX } from 'onnxjs-react-native';
const model = new ONNX.InferenceSession();
await model.loadModel('path/to/model.onnx');
const prediction = await model.run([preprocessedImageTensor]);
// Process with prototypes for final classification
```
## Model Performance
- **Inference Time**: ~45-50ms (CPU)
- **Model Size**: 89.6MB
- **Accuracy**: Optimized for livestock breed recognition
- **Platforms**: Cross-platform (ONNX Runtime support)
## Files Included
- `model.onnx`: The trained ONNX model
- `prototypes.json`: Breed prototype vectors for classification
- `config.json`: Model configuration and metadata
- `sample_images/`: Example images for testing
## Technical Details
- **Feature Extraction**: ResNet-50 backbone → 2048-dim features
- **Normalization**: L2 normalization applied to features
- **Classification**: Cosine similarity with pre-computed breed prototypes (see the sketch after this list)
- **Preprocessing**: ImageNet-style normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
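The prototype vectors themselves can plausibly be built by averaging L2-normalized features over each breed's training images; the snippet below is a sketch under that assumption (the card does not state exactly how `prototypes.json` was produced):
```python
import numpy as np

def build_prototype(feature_vectors):
    """Average L2-normalized feature vectors and re-normalize, so that a dot
    product with a query feature equals cosine similarity."""
    feats = np.stack(feature_vectors)                             # (N, 2048)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # unit length
    prototype = feats.mean(axis=0)
    return prototype / np.linalg.norm(prototype)
```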
## License
Apache 2.0
## Citation
If you use this model, please cite:
```
@misc{cattle-breed-classifier,
title={Cattle and Buffalo Breed Classification Model},
author={Your Name},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/your-username/cattle-breed-classifier}
}
```
## Contact
For questions or issues, please open an issue in the model repository.
|
mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF
|
mradermacher
| 2025-09-17T17:48:02Z | 995 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:CodeGoat24/HPD",
"dataset:CodeGoat24/LiFT-HRA",
"dataset:CodeGoat24/OIP",
"dataset:CodeGoat24/EvalMuse",
"dataset:CodeGoat24/ShareGPTVideo-DPO",
"dataset:CodeGoat24/VideoFeedback",
"dataset:CodeGoat24/LLaVA-Critic-113k",
"dataset:CodeGoat24/VideoDPO",
"base_model:CodeGoat24/UnifiedReward-2.0-qwen-3b",
"base_model:quantized:CodeGoat24/UnifiedReward-2.0-qwen-3b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-04T00:51:52Z |
---
base_model: CodeGoat24/UnifiedReward-2.0-qwen-3b
datasets:
- CodeGoat24/HPD
- CodeGoat24/LiFT-HRA
- CodeGoat24/OIP
- CodeGoat24/EvalMuse
- CodeGoat24/ShareGPTVideo-DPO
- CodeGoat24/VideoFeedback
- CodeGoat24/LLaVA-Critic-113k
- CodeGoat24/VideoDPO
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UnifiedReward-2.0-qwen-3b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-2.0-qwen-3b-i1-GGUF/resolve/main/UnifiedReward-2.0-qwen-3b.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cybershiptrooper/llama-3.3-70b-collusion-mo-backdoors-sft
|
cybershiptrooper
| 2025-09-17T17:46:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T17:32:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trungpq/slac-new-taste-upsample_replacement
|
trungpq
| 2025-09-17T17:37:45Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_model",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:10:36Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: slac-new-taste-upsample_replacement
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# slac-new-taste-upsample_replacement
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6966
- Accuracy: 0.9121
- F1 Macro: 0.8844
- Precision Macro: 0.8807
- Recall Macro: 0.8884
- F1 Micro: 0.9121
- Precision Micro: 0.9121
- Recall Micro: 0.9121
- Total Tf: [1411, 136, 1411, 136]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 277
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:----------------------:|
| 0.2606 | 1.0 | 278 | 0.2660 | 0.9082 | 0.8807 | 0.8732 | 0.8892 | 0.9082 | 0.9082 | 0.9082 | [1405, 142, 1405, 142] |
| 0.1679 | 2.0 | 556 | 0.2799 | 0.9134 | 0.8867 | 0.8812 | 0.8926 | 0.9134 | 0.9134 | 0.9134 | [1413, 134, 1413, 134] |
| 0.1035 | 3.0 | 834 | 0.3705 | 0.9095 | 0.8802 | 0.8789 | 0.8815 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0735 | 4.0 | 1112 | 0.3818 | 0.9089 | 0.8809 | 0.8752 | 0.8871 | 0.9089 | 0.9089 | 0.9089 | [1406, 141, 1406, 141] |
| 0.0473 | 5.0 | 1390 | 0.3992 | 0.9101 | 0.8823 | 0.8772 | 0.8879 | 0.9101 | 0.9101 | 0.9101 | [1408, 139, 1408, 139] |
| 0.0461 | 6.0 | 1668 | 0.4821 | 0.9108 | 0.8831 | 0.8782 | 0.8883 | 0.9108 | 0.9108 | 0.9108 | [1409, 138, 1409, 138] |
| 0.0197 | 7.0 | 1946 | 0.5172 | 0.9063 | 0.8769 | 0.8729 | 0.8811 | 0.9063 | 0.9063 | 0.9063 | [1402, 145, 1402, 145] |
| 0.0148 | 8.0 | 2224 | 0.5698 | 0.9095 | 0.8806 | 0.8781 | 0.8832 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0203 | 9.0 | 2502 | 0.5773 | 0.9095 | 0.8810 | 0.8773 | 0.8849 | 0.9095 | 0.9095 | 0.9095 | [1407, 140, 1407, 140] |
| 0.0037 | 10.0 | 2780 | 0.6239 | 0.9101 | 0.8816 | 0.8787 | 0.8845 | 0.9101 | 0.9101 | 0.9101 | [1408, 139, 1408, 139] |
| 0.0035 | 11.0 | 3058 | 0.6641 | 0.9114 | 0.8806 | 0.8866 | 0.8751 | 0.9114 | 0.9114 | 0.9114 | [1410, 137, 1410, 137] |
| 0.0044 | 12.0 | 3336 | 0.6824 | 0.9089 | 0.8799 | 0.8771 | 0.8828 | 0.9089 | 0.9089 | 0.9089 | [1406, 141, 1406, 141] |
| 0.0035 | 13.0 | 3614 | 0.6953 | 0.9114 | 0.8833 | 0.8804 | 0.8862 | 0.9114 | 0.9114 | 0.9114 | [1410, 137, 1410, 137] |
| 0.0021 | 14.0 | 3892 | 0.7007 | 0.9108 | 0.8829 | 0.8786 | 0.8875 | 0.9108 | 0.9108 | 0.9108 | [1409, 138, 1409, 138] |
| 0.0049 | 15.0 | 4170 | 0.6966 | 0.9121 | 0.8844 | 0.8807 | 0.8884 | 0.9121 | 0.9121 | 0.9121 | [1411, 136, 1411, 136] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
mradermacher/grok-2-GGUF
|
mradermacher
| 2025-09-17T17:21:01Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"base_model:xai-org/grok-2",
"base_model:finetune:xai-org/grok-2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T03:29:33Z |
---
base_model: xai-org/grok-2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/xai-org/grok-2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#grok-2-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
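For the multi-part quants below, the parts are plain byte chunks joined in order; a minimal Python sketch of the concatenation step (the filename is one example from the table):
```python
import glob
import shutil

# Join split GGUF parts (e.g. grok-2.Q4_K_S.gguf.part1of4 ...) into one file.
parts = sorted(glob.glob("grok-2.Q4_K_S.gguf.part*"))
with open("grok-2.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```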
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part3of3) | Q2_K | 100.2 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part3of3) | Q3_K_S | 118.1 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part3of3) | Q3_K_M | 130.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part3of3) | Q3_K_L | 139.2 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part3of3) | IQ4_XS | 146.5 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part4of4) | Q4_K_S | 154.4 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part4of4) | Q4_K_M | 164.2 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part4of4) | Q5_K_S | 186.0 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part4of4) | Q5_K_M | 191.7 | |
| [P1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part5of5) | Q6_K | 221.5 | very good quality |
| [P1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part1of6) [P2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part2of6) [P3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part3of6) [P4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part4of6) [P5](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part5of6) [P6](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part6of6) | Q8_0 | 286.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sreemaee-01/consonants-lora
|
sreemaee-01
| 2025-09-17T17:14:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T17:14:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JessicaE/2025-24679-text-distilbert-predictor
|
JessicaE
| 2025-09-17T17:12:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-17T15:47:39Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: 2025-24679-text-distilbert-predictor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2025-24679-text-distilbert-predictor
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1200
- Accuracy: 0.9861
- F1: 0.9861
- Precision: 0.9865
- Recall: 0.9861
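A minimal inference sketch, assuming the checkpoint is used as a standard `transformers` text-classification model (the label names depend on how the training run was configured):
```python
from transformers import pipeline

# Hypothetical usage: score a piece of text with the fine-tuned classifier.
clf = pipeline("text-classification", model="JessicaE/2025-24679-text-distilbert-predictor")
print(clf("This is an example sentence to classify."))
```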
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6403 | 1.0 | 23 | 0.5775 | 0.8043 | 0.7997 | 0.8354 | 0.8043 |
| 0.4286 | 2.0 | 46 | 0.3586 | 0.8913 | 0.8913 | 0.8920 | 0.8913 |
| 0.2041 | 3.0 | 69 | 0.2224 | 0.9348 | 0.9348 | 0.9356 | 0.9348 |
| 0.0952 | 4.0 | 92 | 0.1914 | 0.9348 | 0.9348 | 0.9356 | 0.9348 |
| 0.0677 | 5.0 | 115 | 0.1885 | 0.9348 | 0.9348 | 0.9356 | 0.9348 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
lmstudio-community/Qwen3-Next-80B-A3B-Instruct-MLX-8bit
|
lmstudio-community
| 2025-09-17T17:09:05Z | 1,460 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-15T21:49:15Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
---
## 💫 Community Model> Qwen3-Next-80B-A3B-Instruct by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
**LM Studio Model Page**: https://lmstudio.ai/models/qwen/qwen3-next-80b<br>
## Technical Details
8-bit quantized version of Qwen3-Next-80B-A3B-Instruct using MLX, optimized for Apple Silicon.
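A minimal sketch of loading the weights with [mlx_lm](https://github.com/ml-explore/mlx-lm) on Apple Silicon (assuming your installed `mlx_lm` version supports the `qwen3_next` architecture):
```python
from mlx_lm import load, generate

# Load the 8-bit MLX weights and tokenizer directly from the Hub.
model, tokenizer = load("lmstudio-community/Qwen3-Next-80B-A3B-Instruct-MLX-8bit")

messages = [{"role": "user", "content": "Give me a one-line summary of MLX."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```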
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758128737
|
devivodowdlel
| 2025-09-17T17:08:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T17:06:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/magnum-nemo-unleashed-GGUF
|
mradermacher
| 2025-09-17T16:42:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Daizee/magnum-nemo-unleashed",
"base_model:quantized:Daizee/magnum-nemo-unleashed",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-17T12:36:41Z |
---
base_model: Daizee/magnum-nemo-unleashed
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Daizee/magnum-nemo-unleashed
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#magnum-nemo-unleashed-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-nemo-unleashed-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-nemo-unleashed-GGUF/resolve/main/magnum-nemo-unleashed.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sidhantoon/Fed_17_T17
|
sidhantoon
| 2025-09-17T16:37:20Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-17T16:30:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
shalhamucha/Meta-Llama-3.1-8B-Instruct-Business_chat
|
shalhamucha
| 2025-09-17T16:34:27Z | 0 | 2 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-15T18:41:04Z |
# Meta-Llama-3.1-8B Instruct - Business Chat
This is a LoRA-fine-tuned version of the Meta LLaMA 3.1 8B Instruct model, specialized for business and startup advice chat. The model was trained using supervised fine-tuning (SFT) on custom business datasets.
---
## Model Details
- **Base Model:** `Meta-Llama-3.1-8B-Instruct`
- **Fine-Tuning Method:** LoRA
- **LoRA Rank (`r`):** 16
- **LoRA Alpha:** 16
- **LoRA Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Training Epochs:** 30
- **Max Sequence Length:** 2048 tokens
- **Optimizer:** 8-bit AdamW
---
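For reference, the adapter settings listed above correspond roughly to the following PEFT configuration. This is a sketch reconstructed from the hyperparameters only; the actual training script is not published:
```python
from peft import LoraConfig

# Reconstructed from the hyperparameters above; illustrative only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```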
## Intended Use
This model is optimized for answering business-related questions and providing advice on startups, entrepreneurship, and corporate strategies. It can be used in:
- Chatbots for business guidance
- Question-answering assistants
- Startup mentorship tools
- Business simulation applications
---
## How to Use
```python
from unsloth import FastLanguageModel
# Load the fine-tuned model
model, tokenizer = FastLanguageModel.from_pretrained(
"shalhamucha/Meta-Llama-3.1-8B-Instruct-Business_chat",
token="YOUR_HF_TOKEN"
)
# Example chat
messages = [
{"role": "system", "content": "You are a helpful business assistant."},
{"role": "user", "content": "How do I start a tech startup?"}
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(
input_ids=inputs,
streamer=text_streamer,
max_new_tokens=128,
use_cache=True
)
```
|
qhchina/SikuBERT-verb-wuyan-couplet-simplified-0.1
|
qhchina
| 2025-09-17T16:33:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"chinese",
"classical-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-09-17T16:28:03Z |
---
library_name: transformers
tags:
- token-classification
- chinese
- classical-chinese
---
# SikuBERT-verb-wuyan-couplet-simplified-0.1
This is a fine-tuned [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert) model for **token-level verb classification** in Classical Chinese couplets.
It classifies each character as either **verb** or **non-verb**.
---
## Usage
```python
from transformers import pipeline
classifier = pipeline(
"token-classification",
model="qhchina/SikuBERT-verb-wuyan-couplet-simplified-0.1",
)
text = "玉叶依岩聚,金枝触石分"
outputs = classifier(text)
for token in outputs:
label = 1 if token["entity"] == "verb" else 0
print(f"{token['word']} -> {label}")
# 玉 -> 0
# 叶 -> 0
# 依 -> 1
# 岩 -> 0
# 聚 -> 1
# , -> 0
# 金 -> 0
# 枝 -> 0
# 触 -> 1
# 石 -> 0
# 分 -> 1
```
|
CharlesLi/qwen_vl_3b_rotation_qa_300step
|
CharlesLi
| 2025-09-17T16:32:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-17T15:25:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DadsDoingDesign/waves-lora
|
DadsDoingDesign
| 2025-09-17T16:29:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2025-09-17T16:22:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/waves_loras_00016_.png
text: ' photo realistic, top down, dynamic, aerial shot of open ocean waves, cinematic aquamarine glow, while the troughs remain a deep, mysterious teal. The waves are breaking in a staggered formation, their white foam creating a beautiful, chaotic texture against the vibrant water.'
parameters:
negative_prompt: >-
(land:1.5), (rock:1.4), (sand:1.4), (beach:1.4), (shore:1.3),
(coastline:1.3), (island:1.3), (mountain:1.2), people, surfing, boats,
animals, symmetrical, unrealistic, glossy, animated, fake, drawn, render,
sky, clouds, humans, skyline, horizon
- output:
url: images/waves_loras_00016_.png
text: '-'
- output:
url: images/waves_loras_00002_.png
text: '-'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# waves-lora
<Gallery />
## Model description
Trained on top-down images of ocean waves, without any land or objects in frame.
## Download model
[Download](/DadsDoingDesign/waves-lora/tree/main) them in the Files & versions tab.
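Below is a minimal inference sketch for loading the adapter with diffusers (dtype, device, and prompts are assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA adapter directly from this repository.
pipe.load_lora_weights("DadsDoingDesign/waves-lora")

image = pipe(
    prompt="photo realistic, top down aerial shot of open ocean waves, "
           "cinematic aquamarine glow, white foam texture",
    negative_prompt="land, rock, sand, beach, shore, sky, horizon",
).images[0]
image.save("waves.png")
```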
|
nikilr/Llama3.1-8B-wjb
|
nikilr
| 2025-09-17T16:18:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T16:16:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pancrasanicet/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_shrewd_wolf
|
Pancrasanicet
| 2025-09-17T15:49:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am snappy_shrewd_wolf",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T08:09:25Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am snappy_shrewd_wolf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FrAnKu34t23/Construction_Risk_Prediction_TinyLlama_Combined
|
FrAnKu34t23
| 2025-09-17T15:47:42Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] |
text-generation
| 2025-09-17T15:47:26Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF
|
berkesencan
| 2025-09-17T15:38:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:IcosaComputingHF/benerGeneral_oss20b_HF",
"base_model:quantized:IcosaComputingHF/benerGeneral_oss20b_HF",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T15:37:00Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: IcosaComputingHF/benerGeneral_oss20b_HF
---
# berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF
This model was converted to GGUF format from [`IcosaComputingHF/benerGeneral_oss20b_HF`](https://huggingface.co/IcosaComputingHF/benerGeneral_oss20b_HF) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IcosaComputingHF/benerGeneral_oss20b_HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF --hf-file benergeneral_oss20b_hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF --hf-file benergeneral_oss20b_hf-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF --hf-file benergeneral_oss20b_hf-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo berkesencan/benerGeneral_oss20b_HF-Q4_K_M-GGUF --hf-file benergeneral_oss20b_hf-q4_k_m.gguf -c 2048
```
|
choiqs/Qwen3-8B-ultrachat-bsz128-ts300-regular-seed42-lr1e-6-4gpus
|
choiqs
| 2025-09-17T15:36:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T15:34:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OCHone/blockassist
|
OCHone
| 2025-09-17T15:32:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful sizable camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T15:32:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful sizable camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ghostai1/ccengine1
|
ghostai1
| 2025-09-17T15:29:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-12T01:36:58Z |
---
license: mit
title: Customer Experience Bot Demo
sdk: gradio
colorFrom: purple
colorTo: green
short_description: CX AI LLM
---
# Mario AI Demo
A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research use in EdTech, GameDev, and AI research.
## Technical Architecture
### AI Pathfinding and Gameplay Pipeline
The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario’s actions:
- **Reinforcement Learning (RL) Agent**:
- Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment.
- Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate.
- Model size: Lightweight (~50MB), compatible with free-tier CPU deployment.
- **Heuristic Pathfinding**:
- Implements the A* pathfinding algorithm for efficient navigation through game levels (see the sketch after this list).
- Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection.
- **Enemy Tactics**:
- Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing challenge as Mario progresses.
- Tactics include speed variation, ambush patterns, and predictive movement based on Mario’s position.
- **Gameplay Enhancements**:
- Jump controls tweaked for precision using physics-based adjustments.
- Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for Super Mushroom).
- Adaptive weather effects (e.g., rain, wind) impacting Mario’s movement and enemy behavior.
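For illustration, here is a compact version of the A* routine referenced above, on a tile grid where obstacles mark enemies and terrain. The grid encoding is an assumption; the demo's real level representation lives in `app.py`:
```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid where 1 = obstacle (Goomba, pipe) and 0 = free tile."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no path found
```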
### Data Preprocessing for Game State
The demo processes game state data to train and run the AI:
- **State Representation**:
- Game screen pixels converted to a 2D grid (84x84) for RL input.
- Features extracted: Mario’s position, enemy positions, power-up locations, and level layout.
- **Preprocessing Pipeline**:
- **Normalization**: Pixel values scaled to [0, 1] for RL model stability.
- **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario’s velocity).
- **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000).
- **Output**: Cleaned state data stored as `mario_states.csv` for training and inference.
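A minimal sketch of this pipeline (normalization plus 4-frame stacking, using the reward constants from the list above; everything else is illustrative):
```python
import numpy as np
from collections import deque

# Reward-shaping constants from the list above.
REWARDS = {"coin": 10, "enemy_defeat": 50, "level_complete": 1000}

class FrameStacker:
    """Normalizes 84x84 grayscale frames to [0, 1] and stacks the last 4."""
    def __init__(self, k: int = 4):
        self.frames = deque(maxlen=k)

    def reset(self, frame: np.ndarray) -> np.ndarray:
        f = frame.astype(np.float32) / 255.0
        for _ in range(self.frames.maxlen):
            self.frames.append(f)
        return np.stack(self.frames, axis=0)  # shape (4, 84, 84)

    def step(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(frame.astype(np.float32) / 255.0)
        return np.stack(self.frames, axis=0)
```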
### Enterprise-Grade AI Compatibility
The processed data and AI model are optimized for:
- **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using SageMaker RL toolkit, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research.
- **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions) via FastAPI.
## Performance Monitoring and Visualization
The demo includes a performance monitoring suite:
- **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds.
- **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run.
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`):
- Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update).
- Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals.
## Gradio Interface for Interactive Demo
The demo is accessible via Gradio, providing an interactive Mario AI experience:
- **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun").
- **Outputs**:
- **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance).
- **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time).
- **Performance Plot**: Visual metrics for latency and success rate.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI.
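The UI wiring reduces to a small `gr.Interface`; this sketch is illustrative, and the level names and returned metrics string are placeholders for the embedded environment in `app.py`:
```python
import gradio as gr

def run_demo(level: str, mode: str) -> str:
    # Placeholder for the embedded game environment in app.py.
    return f"Coins: 10, Enemies Defeated: 2, Completion Time: 40s ({level}, {mode})"

demo = gr.Interface(
    fn=run_demo,
    inputs=[
        gr.Dropdown(["Level 1-1", "Level 1-2"], label="Level"),
        gr.Dropdown(["Exploration", "Speedrun"], label="AI Mode"),
    ],
    outputs=gr.Textbox(label="Metrics"),
)
demo.launch()
```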
## Setup
- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes embedded game environment for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).
## Usage
- **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1").
- **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion).
- **Output**:
- **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins.
- **Metrics**: “Coins: 15, Enemies Defeated: 3, Completion Time: 45s”.
- **Performance Plot**: Visual metrics for latency and success rate.
**Example**:
- **Level**: "Level 1-1"
- **AI Mode**: "Speedrun"
- **Output**:
- Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas.
- Metrics: “Coins: 10, Enemies Defeated: 2, Completion Time: 40s”.
- Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%.
## Technical Details
**Stack**:
- **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation.
- **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training.
- **Pathfinding**: A* algorithm with dynamic obstacle avoidance.
- **Gradio**: Interactive UI for real-time gameplay demos.
- **Matplotlib**: Performance visualization with bar and line charts.
- **FastAPI Compatibility**: Designed for API-driven inference via FastAPI endpoints.
**Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required.
**Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and cloud deployments on AWS Lambda or Azure Functions.
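A hedged training sketch for the stack named above (gym-super-mario-bros + Stable-Baselines3 PPO); the wrappers and timestep count are assumptions, as the demo's actual training script is not published:
```python
import gym_super_mario_bros
from nes_py.wrappers import JoypadSpace
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from stable_baselines3 import PPO

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)  # reduce the raw NES action space

model = PPO("CnnPolicy", env, verbose=1)  # pixel observations -> CNN policy
model.learn(total_timesteps=100_000)      # illustrative budget
model.save("ppo_mario")
```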
## Purpose
This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research.
## Future Enhancements
- **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation.
- **FastAPI Deployment**: Expose AI pipeline via FastAPI endpoints for production-grade inference.
- **Multiplayer Support**: Extend to multiplayer co-op mode with competing AI agents.
- **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments.
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz
## Latest Update
**Status Update**: Optimized collision detection for smoother interactions - May 28, 2025 📝
- Tweaked jump controls for improved accuracy - September 17, 2025 📝
- Added fresh enemy tactics for extra difficulty - September 15, 2025 📝
- Refined AI pathfinding for seamless gameplay - September 14, 2025 📝
- Added support for multiplayer co-op mode 🍄 - September 12, 2025 📝
- Improved level loading times by 30% - September 10, 2025 📝
- Integrated new collectible items for bonus challenges - September 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses - September 07, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - September 05, 2025 📝
- Upgraded power-up distribution system 🎉 - September 04, 2025 📝
- Introduced adaptive weather in game levels - September 02, 2025 📝
- Tweaked jump controls for improved accuracy - August 31, 2025 📝
- Added fresh enemy tactics for extra difficulty 🏰 - August 30, 2025 📝
- Refined AI pathfinding for seamless gameplay 🪙 - August 28, 2025 📝
- Added support for multiplayer co-op mode - August 26, 2025 📝
- Improved level loading times by 30% - August 25, 2025 📝
- Integrated new collectible items for bonus challenges ✨ - August 23, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎩 - August 21, 2025 📝
- Optimized collision detection for smoother interactions 🔥 - August 20, 2025 📝
- Upgraded power-up distribution system - August 18, 2025 📝
- Introduced adaptive weather in game levels 🌈 - August 16, 2025 📝
- Tweaked jump controls for improved accuracy - August 15, 2025 📝
- Added fresh enemy tactics for extra difficulty 🔥 - August 14, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 13, 2025 📝
- Added support for multiplayer co-op mode - August 12, 2025 📝
- Improved level loading times by 30% ⚡ - August 11, 2025 📝
- Integrated new collectible items for bonus challenges - August 10, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🍄 - August 09, 2025 📝
- Optimized collision detection for smoother interactions 🎩 - August 08, 2025 📝
- Upgraded power-up distribution system 🪙 - August 07, 2025 📝
- Introduced adaptive weather in game levels - August 06, 2025 📝
- Tweaked jump controls for improved accuracy 🎉 - August 05, 2025 📝
- Added fresh enemy tactics for extra difficulty - August 04, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 03, 2025 📝
- Added support for multiplayer co-op mode 🌈 - August 02, 2025 📝
- Improved level loading times by 30% ⭐ - August 01, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 31, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 30, 2025 📝
- Optimized collision detection for smoother interactions - July 29, 2025 📝
- Upgraded power-up distribution system - July 28, 2025 📝
- Introduced adaptive weather in game levels ✨ - July 27, 2025 📝
- Tweaked jump controls for improved accuracy ⚡ - July 26, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - July 25, 2025 📝
- Refined AI pathfinding for seamless gameplay - July 24, 2025 📝
- Added support for multiplayer co-op mode - July 23, 2025 📝
- Improved level loading times by 30% - July 22, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 21, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 20, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - July 19, 2025 📝
- Upgraded power-up distribution system - July 18, 2025 📝
- Introduced adaptive weather in game levels - July 17, 2025 📝
- Tweaked jump controls for improved accuracy 🔥 - July 16, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎩 - July 15, 2025 📝
- Refined AI pathfinding for seamless gameplay 🍄 - July 14, 2025 📝
- Added support for multiplayer co-op mode - July 11, 2025 📝
- Improved level loading times by 30% 🪙 - July 10, 2025 📝
- Integrated new collectible items for bonus challenges - July 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses ✨ - July 08, 2025 📝
- Optimized collision detection for smoother interactions 🌈 - July 07, 2025 📝
- Upgraded power-up distribution system ⭐ - July 06, 2025 📝
- Introduced adaptive weather in game levels - July 05, 2025 📝
- Tweaked jump controls for improved accuracy 🏰 - July 04, 2025 📝
- Added fresh enemy tactics for extra difficulty ✨ - July 03, 2025 📝
- Refined AI pathfinding for seamless gameplay 🪙 - July 02, 2025 📝
- Added support for multiplayer co-op mode 🍄 - July 01, 2025 📝
- Improved level loading times by 30% ⚡ - June 30, 2025 📝
- Integrated new collectible items for bonus challenges 🌈 - June 29, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎉 - June 28, 2025 📝
- Optimized collision detection for smoother interactions - June 27, 2025 📝
- Upgraded power-up distribution system - June 26, 2025 📝
- Introduced adaptive weather in game levels 🔥 - June 25, 2025 📝
- Tweaked jump controls for improved accuracy 🎩 - June 24, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 23, 2025 📝
- Refined AI pathfinding for seamless gameplay ✨ - June 22, 2025 📝
- Added support for multiplayer co-op mode 🔥 - June 21, 2025 📝
- Improved level loading times by 30% 🎉 - June 20, 2025 📝
- Integrated new collectible items for bonus challenges 🍄 - June 19, 2025 📝
- Enhanced NPC dialogue with dynamic responses - June 18, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - June 17, 2025 📝
- Upgraded power-up distribution system - June 16, 2025 📝
- Introduced adaptive weather in game levels - June 15, 2025 📝
- Tweaked jump controls for improved accuracy 🪙 - June 14, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 13, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 12, 2025 📝
- Added support for multiplayer co-op mode 🌈 - June 11, 2025 📝
- Improved level loading times by 30% ⚡ - June 10, 2025 📝
- Integrated new collectible items for bonus challenges - June 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎩 - June 08, 2025 📝
- Optimized collision detection for smoother interactions - June 07, 2025 📝
- Upgraded power-up distribution system 🏰 - June 06, 2025 📝
- Introduced adaptive weather in game levels 🏰 - June 05, 2025 📝
- Tweaked jump controls for improved accuracy ⭐ - June 04, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - June 03, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 02, 2025 📝
- Added support for multiplayer co-op mode ✨ - June 01, 2025 📝
- Improved level loading times by 30% - May 31, 2025 📝
- Integrated new collectible items for bonus challenges ⚡ - May 30, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🔥 - May 29, 2025 📝
|
lanlulu/gemma-2-9b-it-tagger-test-adapter
|
lanlulu
| 2025-09-17T15:27:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-14T12:55:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
loubb/aria-medium-base
|
loubb
| 2025-09-17T15:26:15Z | 100 | 3 | null |
[
"safetensors",
"aria",
"music",
"MIDI",
"piano",
"custom_code",
"en",
"dataset:loubb/aria-midi",
"arxiv:2506.23869",
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T15:42:29Z |
---
license: apache-2.0
datasets:
- loubb/aria-midi
language:
- en
tags:
- music
- MIDI
- piano
---
# Model
`Aria` is a pretrained autoregressive generative model for symbolic music based on the LLaMA 3.2 (1B) architecture. It was trained on ~60k hours of MIDI transcriptions of expressive solo-piano recordings. It has been finetuned to produce realistic continuations of solo-piano compositions as well as to produce general-purpose contrastive MIDI embeddings.
This HuggingFace page contains weights and usage instructions for the pretrained base model. For the generative model, see [aria-medium-gen](https://huggingface.co/loubb/aria-medium-base/resolve/main/model-gen.safetensors?download=true), and for the embedding model, see [aria-medium-embedding](https://huggingface.co/loubb/aria-medium-embedding).
📖 Read our [paper](https://arxiv.org/abs/2506.23869)
🚀 Check out the real-time demo in the official [GitHub repository](https://github.com/EleutherAI/aria)
📊 Get access to our training dataset [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) to train your own models
## Usage Guidelines
### Intended Use
The model is most naturally suited for generating continuations of existing MIDI files rather than generating music from scratch (unconditioned). While the tokenizer and model checkpoint technically support multi-track tokenization and generation, multi-track music comprises a minority of the training data, so we highly recommend performing inference with single-track piano MIDI files for optimal results.
### Data Memorization Considerations
Due to overrepresentation of performances of popular compositions (e.g., those from well-known classical composers such as Chopin) and difficulties in completely deduplicating the training data, some of these compositions have been compositionally memorized by the model. We suggest performing inference with lesser-known compositions or your own music for more original results.
### Input Quality Considerations
Since the model has not been post-trained with any instruction tuning or RLHF (similar to pre-instruct GPT models), it is very sensitive to input quality and performs best when prompted with well-played music. To get sample MIDI files, see the `example-prompts/` directory or explore the [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) dataset.
## Quickstart
All of our models were trained using the MIDI tooling and tokenizer available in the [aria-utils](https://github.com/EleutherAI/aria-utils) repository. Install the aria-utils package with pip:
```bash
pip install git+https://github.com/EleutherAI/aria-utils.git
```
You can then generate a continuation to a truncated (piano) MIDI file using the transformers library:
```bash
pip install transformers
pip install torch
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT_MIDI_LOAD_PATH = "mydir/prompt.midi"
CONTINUATION_MIDI_SAVE_PATH = "mydir/continuation.midi"

# Load the model and the aria-utils tokenizer (both require trust_remote_code).
model = AutoModelForCausalLM.from_pretrained(
    "loubb/aria-medium-base",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "loubb/aria-medium-base",
    trust_remote_code=True,
)

# Tokenize the prompt MIDI file.
prompt = tokenizer.encode_from_file(
    PROMPT_MIDI_LOAD_PATH, return_tensors="pt"
)

# Generate a continuation from the first 512 prompt tokens.
continuation = model.generate(
    prompt.input_ids[..., :512],
    max_length=2048,
    do_sample=True,
    temperature=0.97,
    top_p=0.95,
    use_cache=True,
)

# Decode back to MIDI and save.
midi_dict = tokenizer.decode(continuation[0].tolist())
midi_dict.to_midi().save(CONTINUATION_MIDI_SAVE_PATH)
```
## License and Attribution
The Aria project has been kindly supported by EleutherAI and Stability AI, as well as by a compute grant from the Ministry of Science and ICT of Korea. Our models and MIDI tooling are released under the Apache-2.0 license. If you use the models or tooling for follow-up work, please cite the paper in which they were introduced:
```bibtex
@misc{bradshawscaling,
  title={Scaling Self-Supervised Representation Learning for Symbolic Piano Performance},
  author={Bradshaw, Louis and Fan, Honglu and Spangher, Alexander and Biderman, Stella and Colton, Simon},
  year={2025},
  eprint={2506.23869},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2506.23869}
}
```
|
mradermacher/Mithril-Base-LLaMa-70B-GGUF
|
mradermacher
| 2025-09-17T15:25:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksLab/Mithril-Base-LLaMa-70B",
"base_model:quantized:TareksLab/Mithril-Base-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-17T11:42:08Z |
---
base_model: TareksLab/Mithril-Base-LLaMa-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TareksLab/Mithril-Base-LLaMa-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mithril-Base-LLaMa-70B-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
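If both parts of a split quant have been downloaded, here is a minimal Python sketch for reassembling them (file names below are illustrative):
```python
# Minimal sketch: reassemble a split GGUF quant into a single file.
# Assumes both .partXofY files were already downloaded into the
# current directory; adjust the pattern to the quant you fetched.
from pathlib import Path

parts = sorted(Path(".").glob("Mithril-Base-LLaMa-70B.Q6_K.gguf.part*"))
assert parts, "download the .part files first"
with open("Mithril-Base-LLaMa-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```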
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mithril-Base-LLaMa-70B-GGUF/resolve/main/Mithril-Base-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
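As one hedged example of loading a downloaded quant locally (assuming the `llama-cpp-python` bindings are installed; the file name is illustrative):
```python
# Minimal sketch, assuming llama-cpp-python; not an official example.
from llama_cpp import Llama

llm = Llama(
    model_path="Mithril-Base-LLaMa-70B.Q4_K_S.gguf",  # any quant from the table
    n_ctx=4096,        # context length; raise if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```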
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758122583
|
devivodowdlel
| 2025-09-17T15:24:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T15:24:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unsloth/Magistral-Small-2509-FP8-torchao
|
unsloth
| 2025-09-17T15:22:07Z | 0 | 0 |
vllm
|
[
"vllm",
"pytorch",
"mistral3",
"mistral-common",
"unsloth",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"arxiv:2506.10910",
"base_model:mistralai/Magistral-Small-2509",
"base_model:quantized:mistralai/Magistral-Small-2509",
"license:apache-2.0",
"torchao",
"region:us"
] | null | 2025-09-17T15:18:45Z |
---
base_model:
- mistralai/Magistral-Small-2509
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: vllm
license: apache-2.0
inference: false
tags:
- vllm
- mistral-common
- unsloth
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Learn to run Magistral 1.2 correctly - <a href="https://docs.unsloth.ai/basics/magistral">Read our Guide</a>.</strong>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves SOTA performance in model quantization.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/magistral">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ How to Use Magistral 1.2:</h1>
</div>
Run in llama.cpp:
```
./llama.cpp/llama-cli -hf unsloth/Magistral-Small-2509-GGUF:UD-Q4_K_XL --jinja --temp 0.7 --top-k -1 --top-p 0.95 -ngl 99
```
Run in Ollama:
```
ollama run hf.co/unsloth/Magistral-Small-2509-GGUF:UD-Q4_K_XL
```
Read our in-depth guide about Magistral 1.2: [docs.unsloth.ai/basics/magistral](https://docs.unsloth.ai/basics/magistral)
- **Fine-tune Magistral 1.2** for free using our [Kaggle notebook here](https://www.kaggle.com/notebooks/welcome?src=https://github.com/unslothai/notebooks/blob/main/nb/Kaggle-Magistral_(24B)-Reasoning-Conversational.ipynb&accelerator=nvidiaTeslaT4)!
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
---
# Magistral Small 1.2
Magistral Small 1.2 builds upon [Mistral Small 3.2 (2506)](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) **with added reasoning capabilities**: SFT on Magistral Medium traces followed by RL on top yields a small, efficient reasoning model with 24B parameters.
Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
Learn more about Magistral in our [blog post](https://mistral.ai/news/magistral/).
The model was presented in the paper [Magistral](https://huggingface.co/papers/2506.10910).
## Updates compared with [Magistral Small 1.1](https://huggingface.co/mistralai/Magistral-Small-2507)
- **Multimodality**: The model now has a vision encoder and can take multimodal inputs, extending its reasoning capabilities to vision.
- **Performance upgrade**: Magistral Small 1.2 should give you significantly better performance than Magistral Small 1.1, as seen in the [benchmark results](#benchmark-results).
- **Better tone and persona**: You should see better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
- **Finite generation**: The model is less likely to enter infinite generation loops.
- **Special think tokens**: [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace (see the sketch after this list) and prevents confusion when the '[THINK]' token is given as a string in the prompt.
- **Reasoning prompt**: The reasoning prompt is given in the system prompt.
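As an illustration of the special-think-tokens point above, a minimal sketch for splitting a decoded completion into its reasoning trace and final answer (assuming the markers survive decoding verbatim):
```python
# Minimal sketch: separate the [THINK]...[/THINK] chunk from the answer.
def split_reasoning(decoded: str) -> tuple[str, str]:
    start = decoded.find("[THINK]")
    end = decoded.find("[/THINK]")
    if start == -1 or end == -1:
        return "", decoded  # no thinking chunk present
    reasoning = decoded[start + len("[THINK]"):end]
    answer = decoded[end + len("[/THINK]"):]
    return reasoning.strip(), answer.strip()
```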
## Key Features
- **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Vision**: Vision capabilities enable the model to analyze images and reason based on visual content in addition to text.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window. Performance *might* degrade past **40k**, but Magistral should still give good results. Hence we recommend leaving the maximum model length at 128k and lowering it only if you encounter performance issues.
## Benchmark Results
| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|--------------------------|---------------|---------------|--------------|--------------------|
| **Magistral Medium 1.2** | **91.82%** | **83.48%** | **76.26%** | **75.00%** |
| Magistral Medium 1.1 | 72.03% | 60.99% | 71.46% | 59.35% |
| Magistral Medium 1.0 | 73.59% | 64.95% | 70.83% | 59.36% |
| **Magistral Small 1.2** | **86.14%** | **77.34%** | **70.07%** | **70.88%** |
| Magistral Small 1.1 | 70.52% | 62.03% | 65.78% | 59.17% |
| Magistral Small 1.0 | 70.68% | 62.76% | 68.18% | 55.84% |
## Sampling parameters
Please make sure to use:
- `top_p`: 0.95
- `temperature`: 0.7
- `max_tokens`: 131072
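For example, a minimal offline-vLLM sketch applying these settings (the loader flags are illustrative assumptions; the served examples below use the OpenAI-compatible API instead):
```python
# Minimal sketch, assuming vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Magistral-Small-2509",
    tokenizer_mode="mistral",   # route tokenization through mistral-common
    max_model_len=131072,       # keep the full 128k context window
)
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=131072)
```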
## Basic Chat Template
We highly recommend including the following system prompt for the best results; you can edit and customise it as needed for your specific use case.
```py
First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.
Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response. Use the same language as the input.[/THINK]Here, provide a self-contained response.
```
`[THINK]` and `[/THINK]` are special tokens that **must** be encoded as such.
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***. Find [below](#usage) examples from libraries supporting `mistral-common`.
We invite you to choose, depending on your use case and requirements, between keeping reasoning traces during multi-turn interactions or keeping only the final assistant response.
Query the model as follows:
<details>
<summary>Python text snippet</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
query = "Use each number in 2,5,6,3 exactly once, along with any combination of +, -, ×, ÷ (and parentheses for grouping), to make the number 24."
messages = [
SYSTEM_PROMPT,
{"role": "user", "content": query}
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print("No answer was generated by the model, probably because the maximum number of tokens was reached.")
# client: Start streaming chat completions...:
#
# Start reasoning:
# First, I need to ...
# ...
#
#
# =============
# Answer
# =============
#
# Here's one way to use the numbers 2, 5, 6, 3 to make 24:
#
#\[
#(6 \div 2) \times (5 + 3) = 3 \times 8 = 24
#\]
#
#Alternatively, another solution is:
#
#\[
#6 \times (5 - 3 + 2) = 6 \times 4 = 24
#\]
#
#Both expressions use each of the numbers 2, 5, 6, 3 exactly once with the operations given.
```
</details>
<details>
<summary>Python text-image snippet: Pokemon</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# In the image, we see a battle scene from a Pokémon game. The player's Pikachu is at full health (83/83 HP), and the opponent's Pidgey is at a lower level (level 17 compared to Pikachu's level 42). The possible actions available to the player are:
# 1. FIGHT: This allows the player to use one of Pikachu's moves to attack Pidgey. Given that Pikachu is at a higher level and has full HP, it is likely that Pikachu would be able to defeat Pidgey easily. This is a good option because it could potentially win the battle quickly and efficiently.
# 2. BAG: This allows the player to use an item from their bag. This could be useful if the player wants to heal Pikachu (though it's not necessary at full health) or use an item to weaken Pidgey. However, since Pikachu is at full health and Pidgey is at a lower level, this might not be necessary. It could be a good option if the player wants to use a special item, but generally, it might not be the best choice in this situation.
# 3. POKÉMON: This allows the player to switch the current Pokémon to another one in their team. Since Pikachu is at full health and at a higher level than Pidgey, switching might not be necessary. It could be useful if the player wants to train a different Pokémon, but it might not be the most efficient choice for winning the battle quickly.
# 4. RUN: This allows the player to flee from the battle. This could be a good option if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or if they are trying to avoid losing a Pokémon, but in this case, it seems unnecessary.
# Given the circumstances, the best action seems to be to FIGHT, as Pikachu is at a clear advantage in terms of level and health. The other options are not as efficient for winning the battle quickly.In the given scenario, the most appropriate action to take is to FIGHT. Here's why:
# 1. FIGHT: This is the best option because Pikachu is at a higher level and has full health, making it likely to defeat Pidgey quickly and efficiently. Using an attack move would be the most straightforward way to win the battle.
# 2. BAG: While this option could be useful for healing or using special items, it is not necessary since Pikachu is already at full health. This option is less efficient for winning the battle quickly.
# 3. POKÉMON: Switching to another Pokémon might be useful for training a different Pokémon, but it is not necessary since Pikachu is at a clear advantage. This option is not as efficient for winning the current battle.
# 4. RUN: Fleeing from the battle could be useful if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or avoid losing a Pokémon, but in this case, it seems unnecessary.
# Therefore, the best action to take in this situation is to FIGHT.
# FIGHT
# =============
# Answer
# =============
# In the given scenario, the most appropriate action to take is to FIGHT. Here's why:
# 1. FIGHT: This is the best option because Pikachu is at a higher level and has full health, making it likely to defeat Pidgey quickly and efficiently. Using an attack move would be the most straightforward way to win the battle.
# 2. BAG: While this option could be useful for healing or using special items, it is not necessary since Pikachu is already at full health. This option is less efficient for winning the battle quickly.
# 3. POKÉMON: Switching to another Pokémon might be useful for training a different Pokémon, but it is not necessary since Pikachu is at a clear advantage. This option is not as efficient for winning the current battle.
# 4. RUN: Fleeing from the battle could be useful if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or avoid losing a Pokémon, but in this case, it seems unnecessary.
# Therefore, the best action to take in this situation is to FIGHT.
# FIGHT
```
</details>
<details>
<summary>Python text-image snippet: Geo trivia</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d7/201806_Tianducheng_Bird-eye_View.jpg/1280px-201806_Tianducheng_Bird-eye_View.jpg"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "Where has this picture been taken ?",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# The image shows a replica of the Eiffel Tower, but it's not in Paris. The background includes mountains, which are not present in Paris. The surrounding architecture appears to be more modern and dense, which is also not typical of Paris. The combination of the Eiffel Tower replica and the mountainous backdrop suggests that this is likely in a city in China, as China has several replicas of the Eiffel Tower, with the most famous one being in Shanghai. However, the dense residential buildings and the specific layout suggest that this might be in another city in China, possibly Shenzhen or another major city with a similar landscape.
# Given that the question is about identifying the location based on the visual clues, and considering the presence of the Eiffel Tower replica and the mountainous backdrop, it's likely that this is a well-known location in China.
# The most probable answer is that this is in Shenzhen, as it has a well-known Eiffel Tower replica in a park, but to be precise, this is the Eiffel Tower replica in Shenzhen, which is known as the "Shenzhen Park of Eiffel Tower."
# However, to be more accurate, this is likely the Eiffel Tower replica in Shenzhen, as it matches the description and visual elements.The image shows a replica of the Eiffel Tower, which is not in Paris but rather in a city with a mountainous backdrop and modern, dense architecture. This combination of elements is typical of a Chinese city, and the presence of the Eiffel Tower replica suggests a location like Shenzhen, which is known for having such a replica. The dense residential buildings and the specific layout further support this identification. Therefore, the most probable location for this image is Shenzhen, China.
# So, the answer is:
# Shenzhen
# =============
# Answer
# =============
# The image shows a replica of the Eiffel Tower, which is not in Paris but rather in a city with a mountainous backdrop and modern, dense architecture. This combination of elements is typical of a Chinese city, and the presence of the Eiffel Tower replica suggests a location like Shenzhen, which is known for having such a replica. The dense residential buildings and the specific layout further support this identification. Therefore, the most probable location for this image is Shenzhen, China.
# So, the answer is:
# Shenzhen
```
</details>
<details>
<summary>Python text-image snippet: Maths</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://i.ytimg.com/vi/5Y3xLHeyKZU/hqdefault.jpg"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "Solve the equations. Answer in the language of the image.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# Je dois résoudre ce système d'équations. Voici les équations :
# 1. \(5x + 2y = -2\)
# 2. \(3x - 4y = 17\)
# D'abord, je pense que la méthode d'élimination pourrait être une bonne approche. Pour cela, je dois éliminer une des variables. Voyons comment.
# Je vais essayer d'éliminer y. Pour cela, je dois que les coefficients de y soient les mêmes (en valeur absolue) dans les deux équations.
# Le coefficient de y dans la première équation est 2, et dans la deuxième, c'est -4. Le plus petit multiple commun de 2 et 4 est 4. Donc, je vais multiplier la première équation par 2 pour que le coefficient de y devienne 4.
# Faisons cela :
# 1. \(2 \times (5x + 2y) = 2 \times (-2)\)
# Ce qui donne : \(10x + 4y = -4\)
# Maintenant, les équations sont :
# 1. \(10x + 4y = -4\)
# 2. \(3x - 4y = 17\)
# Maintenant, si j'additionne ces deux équations, les termes avec y s'annuleront.
# Faisons l'addition :
# \( (10x + 4y) + (3x - 4y) = -4 + 17 \)
# Ce qui donne : \(13x = 13\)
# Donc, \(x = 1\).
# Maintenant que nous avons x, nous pouvons le substituer dans une des équations originales pour trouver y. Utilisons la première équation originale :
# \(5x + 2y = -2\)
# En substituant x = 1 :
# \(5(1) + 2y = -2\)
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# ...
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# \(2y = -2 - 5\)
# \(2y = -7\)
# Diviser par 2 :
# \(y = -\frac{7}{2}\)
# Donc, la solution est \(x = 1\) et \(y = -\frac{7}{2}\).
# $\boxed{x = 1,\ y = -\frac{7}{2}}$
# =============
# Answer
# =============
# Pour résoudre le système d'équations donné :
# 1. \(5x + 2y = -2\)
# 2. \(3x - 4y = 17\)
# Nous commençons par utiliser la méthode d'élimination pour éliminer une des variables. Nous choisissons d'éliminer \(y\) en rendant ses coefficients identiques en valeur absolue. Le coefficient de \(y\) dans la première équation est 2, et dans la deuxième, c'est -4. Le plus petit multiple commun de 2 et 4 est 4. Nous multiplions donc la première équation par 2 pour que le coefficient de \(y\) devienne 4.
# Faisons cela :
# 1. \(2 \times (5x + 2y) = 2 \times (-2)\)
# Ce qui donne : \(10x + 4y = -4\)
# Maintenant, les équations sont :
# 1. \(10x + 4y = -4\)
# 2. \(3x - 4y = 17\)
# En additionnant ces deux équations, les termes avec \(y\) s'annuleront :
# \( (10x + 4y) + (3x - 4y) = -4 + 17 \)
# Ce qui donne : \(13x = 13\)
# Donc, \(x = 1\).
# Ensuite, nous substituons \(x = 1\) dans la première équation originale pour trouver \(y\) :
# \(5(1) + 2y = -2\)
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# \(2y = -2 - 5\)
# \(2y = -7\)
# Diviser par 2 :
# \(y = -\frac{7}{2}\)
# Donc, la solution est \(x = 1\) et \(y = -\frac{7}{2}\).
# $\boxed{x = 1,\ y = -\frac{7}{2}}$
```
</details>
### Transformers
Make sure you install the latest [`Transformers`](https://github.com/huggingface/transformers/) version:
```sh
pip install --upgrade transformers[mistral-common]
```
This should also install [`mistral_common >= 1.8.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.8.5).
To check:
```sh
python -c "import mistral_common; print(mistral_common.__version__)"
```
Now you can use Transformers with Magistral:
<details>
<summary>Python snippet</summary>
```python
from typing import Any
import torch
from huggingface_hub import hf_hub_download
from transformers import Mistral3ForConditionalGeneration
from transformers import AutoTokenizer
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
tokenizer = AutoTokenizer.from_pretrained(model_id, tokenizer_type="mistral")
model = Mistral3ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
tokenized = tokenizer.apply_chat_template(messages, return_dict=True)
input_ids = torch.tensor(tokenized.input_ids, device="cuda").unsqueeze(0)
attention_mask = torch.tensor(tokenized.attention_mask, device="cuda").unsqueeze(0)
pixel_values = torch.tensor(
tokenized.pixel_values[0], dtype=torch.bfloat16, device="cuda"
).unsqueeze(0)
image_sizes = torch.tensor(pixel_values.shape[-2:], device="cuda").unsqueeze(0)
with torch.inference_mode():
output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
pixel_values=pixel_values,
image_sizes=image_sizes,
)[0]
decoded_output = tokenizer.decode(
output[
len(tokenized.input_ids) : (
-1 if output[-1] == tokenizer.eos_token_id else len(output)
)
]
)
print(decoded_output)
# [THINK]Alright, let's analyze the image carefully. It's a scene from a Pokémon game. The player is controlling Pikachu, which is at level 42 with full HP (83/83). The opponent is a Pidgey at level 17. The question is asking what action the player should take in this situation.
# First, let's list all the possible actions available. From the bottom of the screen, the options are:
# 1. FIGHT
# 2. BAG
# 3. POKÉMON
# 4. RUN
# Now, let's consider each option:
# 1. **FIGHT**: This means using Pikachu's moves to attack the Pidgey.
# - Pros: Pikachu is at a higher level (42) compared to Pidgey (17), so it has a significant advantage. Pikachu's HP is full, so it's in good condition to fight. Fighting could potentially win the battle quickly.
# - Cons: Even though Pikachu is stronger, there's always a risk of Pidgey landing a lucky hit or using a powerful move. However, given the level difference, this is less likely.
# 2. **BAG**: This means using items from the bag to help in the battle.
# - Pros: Could use a potion to heal (though Pikachu is already at full HP), or use another item like a Poké Ball to try and catch Pidgey.
# - Cons: Using items might be less efficient than just fighting, especially since Pikachu is already at full health. Also, if the goal is to catch Pidgey, using items to weaken it first might be useful, but the immediate advantage isn't clear.
# 3. **POKÉMON**: This means switching to another Pokémon from the team.
# - Pros: If the player has another Pokémon that is stronger or has moves that are super effective against Pidgey, this could be useful.
# - Cons: Pikachu is already at a significant level advantage and is at full health, so switching might not be necessary unless there's a strategic reason (e.g., leveling up another Pokémon).
# 4. **RUN**: This means attempting to flee from the battle.
# - Pros: If the player wants to avoid the battle for some reason (e.g., saving time, or wanting to catch Pidgey without weakening it), running could be useful.
# - Cons: If the goal is to catch or defeat Pidgey, running would not help in achieving that. Also, running might not always be successful, especially in a grass encounter where the chance to flee might be lower.
# Now, considering the context, if the goal is to defeat Pidgey, the best action would be to FIGHT since Pikachu is at a significant advantage in terms of level and health. Using the BAG could be useful if the goal is to catch Pidgey, as items like Poké Balls or berries could be used. Switching Pokémon doesn't seem necessary given Pikachu's current status. Running would only be useful if the player wants to avoid the battle, which seems unlikely if the goal is to progress in the game.
# However, the question asks for the best action without specifying the goal. Assuming the goal is to defeat Pidgey efficiently, the best action would be to FIGHT.
# But to list all possible actions and their pros and cons:
# 1. **FIGHT**
# - Pros: High chance of winning quickly due to level and health advantage.
# - Cons: Small risk of Pikachu getting damaged.
# 2. **BAG**
# - Pros: Could use items to catch Pidgey or support Pikachu (though not necessary for healing).
# - Cons: Might be less efficient if the goal is just to defeat Pidgey.
# 3. **POKÉMON**
# - Pros: Could switch to another Pokémon for strategic reasons (e.g., type advantage, training a different Pokémon).
# - Cons: Pikachu is already at a strong position, so switching might be unnecessary unless there's another goal.
# 4. **RUN**
# - Pros: Escape the battle if needed.
# - Cons: Does not contribute to progress if the goal is to defeat or catch Pidgey.
# Given that, if the goal is to defeat Pidgey, the best action is to FIGHT. If the goal is to catch Pidgey, using the BAG to throw a Poké Ball could be the best action.[/THINK]Here are the possible actions and their explanations:
# 1. **FIGHT**
# - **Pros**: Pikachu is at a much higher level (42) compared to Pidgey (17) and has full HP. This makes it very likely to win quickly.
# - **Cons**: There's always a small risk that Pidgey could land a lucky hit, but this is unlikely given the level difference.
# 2. **BAG**
# - **Pros**: You could use items to catch Pidgey (e.g., Poké Ball) or support Pikachu (though Pikachu is already at full HP).
# - **Cons**: Using items might be less efficient if the goal is simply to defeat Pidgey, as fighting directly could be quicker.
# 3. **POKÉMON**
# - **Pros**: You could switch to another Pokémon that might have a type advantage or that you want to train.
# - **Cons**: Pikachu is already in a strong position to defeat Pidgey, so switching might not be necessary unless there's another strategic reason.
# 4. **RUN**
# - **Pros**: You can escape the battle if you need to, for example, if you want to preserve Pikachu's health for a tougher battle ahead.
# - **Cons**: Running doesn't help you progress if your goal is to defeat or catch Pidgey. Additionally, the success rate for running might be lower in a grass encounter.
# Given these considerations, if your goal is to defeat Pidgey, the best action is likely to **FIGHT**, as Pikachu is at a significant advantage. If your goal is to catch Pidgey, using the **BAG** to throw a Poké Ball could be the best choice. If you're looking to train a different Pokémon, you might consider switching with **POKÉMON**, and if you need to preserve resources or Pikachu's health, **RUN** could be an option.
```
</details>
|
WaiLwin/senior_adaptor
|
WaiLwin
| 2025-09-17T15:21:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T13:39:40Z |
---
base_model: unsloth/tinyllama-chat-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** WaiLwin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758121978
|
devivodowdlel
| 2025-09-17T15:14:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T15:14:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758121349
|
devivodowdlel
| 2025-09-17T15:04:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T15:03:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trungpq/slac-new-appearance-upsample_replacement
|
trungpq
| 2025-09-17T14:35:03Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_model",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T15:03:04Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: slac-new-appearance-upsample_replacement
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# slac-new-appearance-upsample_replacement
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- Accuracy: 0.9735
- F1 Macro: 0.9525
- Precision Macro: 0.9561
- Recall Macro: 0.9491
- F1 Micro: 0.9735
- Precision Micro: 0.9735
- Recall Micro: 0.9735
- Total Tf: [1506, 41, 1506, 41]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 313
- num_epochs: 15
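As a rough sketch, these settings correspond to the following `transformers` training arguments (the original training script is not published, so the names below are the standard library parameters):
```python
# Minimal sketch of equivalent TrainingArguments; illustrative only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="slac-new-appearance-upsample_replacement",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=313,
    num_train_epochs=15,
)
```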
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------------------:|
| 0.1063 | 1.0 | 314 | 0.1240 | 0.9619 | 0.9354 | 0.9168 | 0.9573 | 0.9619 | 0.9619 | 0.9619 | [1488, 59, 1488, 59] |
| 0.0273 | 2.0 | 628 | 0.1173 | 0.9741 | 0.9545 | 0.9504 | 0.9586 | 0.9741 | 0.9741 | 0.9741 | [1507, 40, 1507, 40] |
| 0.0209 | 3.0 | 942 | 0.1238 | 0.9735 | 0.9528 | 0.9535 | 0.9521 | 0.9735 | 0.9735 | 0.9735 | [1506, 41, 1506, 41] |
| 0.0153 | 4.0 | 1256 | 0.1414 | 0.9716 | 0.9493 | 0.9507 | 0.9479 | 0.9716 | 0.9716 | 0.9716 | [1503, 44, 1503, 44] |
| 0.0034 | 5.0 | 1570 | 0.1748 | 0.9741 | 0.9533 | 0.9606 | 0.9465 | 0.9741 | 0.9741 | 0.9741 | [1507, 40, 1507, 40] |
| 0.0144 | 6.0 | 1884 | 0.1686 | 0.9709 | 0.9482 | 0.9489 | 0.9475 | 0.9709 | 0.9709 | 0.9709 | [1502, 45, 1502, 45] |
| 0.0033 | 7.0 | 2198 | 0.2072 | 0.9677 | 0.9420 | 0.9462 | 0.9380 | 0.9677 | 0.9677 | 0.9677 | [1497, 50, 1497, 50] |
| 0.0065 | 8.0 | 2512 | 0.1792 | 0.9741 | 0.9535 | 0.9592 | 0.9480 | 0.9741 | 0.9741 | 0.9741 | [1507, 40, 1507, 40] |
| 0.0024 | 9.0 | 2826 | 0.1889 | 0.9741 | 0.9540 | 0.9540 | 0.9540 | 0.9741 | 0.9741 | 0.9741 | [1507, 40, 1507, 40] |
| 0.0031 | 10.0 | 3140 | 0.2047 | 0.9729 | 0.9522 | 0.9482 | 0.9563 | 0.9729 | 0.9729 | 0.9729 | [1505, 42, 1505, 42] |
| 0.0006 | 11.0 | 3454 | 0.2151 | 0.9735 | 0.9521 | 0.9601 | 0.9445 | 0.9735 | 0.9735 | 0.9735 | [1506, 41, 1506, 41] |
| 0.0011 | 12.0 | 3768 | 0.2255 | 0.9722 | 0.9504 | 0.9525 | 0.9483 | 0.9722 | 0.9722 | 0.9722 | [1504, 43, 1504, 43] |
| 0.0011 | 13.0 | 4082 | 0.2239 | 0.9722 | 0.9505 | 0.9512 | 0.9498 | 0.9722 | 0.9722 | 0.9722 | [1504, 43, 1504, 43] |
| 0.0004 | 14.0 | 4396 | 0.2233 | 0.9735 | 0.9525 | 0.9561 | 0.9491 | 0.9735 | 0.9735 | 0.9735 | [1506, 41, 1506, 41] |
| 0.0011 | 15.0 | 4710 | 0.2258 | 0.9735 | 0.9525 | 0.9561 | 0.9491 | 0.9735 | 0.9735 | 0.9735 | [1506, 41, 1506, 41] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
khazarai/Medical-TTS
|
khazarai
| 2025-09-17T14:06:20Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"unsloth",
"trl",
"transformers",
"text-to-speech",
"en",
"dataset:Dev372/Medical_STT_Dataset_1.0",
"base_model:unsloth/csm-1b",
"base_model:adapter:unsloth/csm-1b",
"license:mit",
"region:us"
] |
text-to-speech
| 2025-09-17T13:54:51Z |
---
base_model: unsloth/csm-1b
library_name: peft
license: mit
datasets:
- Dev372/Medical_STT_Dataset_1.0
language:
- en
pipeline_tag: text-to-speech
tags:
- unsloth
- trl
- transformers
---
# Model Card for Model ID
## Model Details
### Model Description
This model is a fine-tuned version of csm-1B for medical text-to-speech tasks.
It was trained on a curated dataset of ~2,000 medical text-to-speech pairs, focusing on clinical terminology, healthcare instructions, and patient–doctor communication scenarios.
- **Fine-tuned for:** Medical-domain text-to-speech synthesis
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model :** csm-1b
## Uses
### Direct Use
- Generating synthetic speech from medical text for research, prototyping, and educational purposes
- Assisting in medical transcription-to-speech applications
- Supporting voice-based healthcare assistants
## Bias, Risks, and Limitations
- The model is not a substitute for professional medical advice.
- Trained on a relatively small dataset (~2K samples) → performance may be limited outside the fine-tuned domain.
- Bias & hallucinations: The model may mispronounce rare terms or produce inaccurate speech in critical scenarios.
- Should not be used in real clinical decision-making without proper validation.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
import soundfile as sf
from peft import PeftModel
model_id = "unsloth/csm-1b"
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model_id)
base_model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
model = PeftModel.from_pretrained(base_model, "khazarai/Medical-TTS")
text = "Mild dorsal angulation of the distal radius reflective of the fracture."
speaker_id = 0
conversation = [
{"role": str(speaker_id), "content": [{"type": "text", "text": text}]},
]
audio_values = model.generate(
**processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
    ).to(device),  # use the detected device rather than hardcoding "cuda"
max_new_tokens=650,
# play with these parameters to tweak results
# depth_decoder_top_k=0,
# depth_decoder_top_p=0.9,
# depth_decoder_do_sample=True,
# depth_decoder_temperature=0.9,
# top_k=0,
# top_p=1.0,
# temperature=0.9,
# do_sample=True,
#########################################################
output_audio=True
)
audio = audio_values[0].to(torch.float32).cpu().numpy()
sf.write("example.wav", audio, 24000)
```
### Framework versions
- PEFT 0.15.2
|
defk0n1/llama3b500kv2tokenizer
|
defk0n1
| 2025-09-17T14:05:05Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T14:05:03Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
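For reference, the Lacoste et al. (2019) estimate reduces to hours × hardware power draw (kW) × grid carbon intensity (kg CO2eq/kWh). A back-of-envelope sketch with illustrative placeholder numbers, not measurements from this model:

```python
# Back-of-envelope CO2eq estimate in the spirit of Lacoste et al. (2019).
# All numbers below are illustrative placeholders, not measured values.
hours = 24.0            # total GPU hours (placeholder)
power_draw_kw = 0.3     # ~300 W for a single GPU (placeholder)
carbon_intensity = 0.4  # kg CO2eq per kWh for the compute region (placeholder)

emissions_kg = hours * power_draw_kw * carbon_intensity
print(f"Estimated emissions: {emissions_kg:.2f} kg CO2eq")  # 2.88 kg for these inputs
```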
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Andrei1980/sft-ygpt-merged-system | Andrei1980 | 2025-09-17T14:00:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T13:13:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
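Absent author-provided code, a minimal sketch under the assumption that this is a standard 🤗 transformers Llama-family chat checkpoint, as the repo tags `llama`, `text-generation`, and `conversational` suggest:

```python
# Minimal sketch, assuming a standard transformers chat checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Andrei1980/sft-ygpt-merged-system"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Format a single-turn conversation with the model's chat template, if one is defined.
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```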
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ACECA/lowMvMax_3 | ACECA | 2025-09-17T13:33:39Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-09-16T13:55:48Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|