modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
quanxuantruong/tqa-stage1-t5-full-7epoch-400k
|
quanxuantruong
| 2025-08-07T14:54:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:57:10Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: tqa-stage1-t5-full-7epoch-400k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tqa-stage1-t5-full-7epoch-400k
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
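Since no usage snippet is provided, here is a minimal inference sketch, assuming the checkpoint loads like any standard seq2seq `transformers` model (the example prompt is illustrative only; the expected input format is not documented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the checkpoint keeps the standard T5 seq2seq layout inherited from flan-t5-base.
model_id = "quanxuantruong/tqa-stage1-t5-full-7epoch-400k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt only; the training data and prompt format are not documented.
inputs = tokenizer("question: Who wrote Hamlet?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```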
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
motza0025/blockassist-bc-scampering_scaly_salmon_1754572455
|
motza0025
| 2025-08-07T13:29:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering scaly salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T13:29:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering scaly salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LAfricaMobile/Wav2vec2-Wolof-kenLM
|
LAfricaMobile
| 2025-08-07T12:27:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-07T12:26:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
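While this section is still to be completed, here is a minimal sketch for trying the checkpoint with the standard `transformers` ASR pipeline (an assumption based on the `automatic-speech-recognition` pipeline tag; the audio file path is a placeholder):
```python
from transformers import pipeline

# Assumption: the checkpoint works with the generic ASR pipeline; the card does not yet
# document the expected sampling rate or how the kenLM decoder is applied.
asr = pipeline("automatic-speech-recognition", model="LAfricaMobile/Wav2vec2-Wolof-kenLM")
result = asr("example_wolof_audio.wav")  # placeholder path to a local audio file
print(result["text"])
```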
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/CoLA-Fisher-GLoRA-p20-seed20
|
ekiprop
| 2025-08-07T12:15:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T12:13:50Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-GLoRA-p20-seed20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-GLoRA-p20-seed20
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4553
- Matthews Correlation: 0.4831
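As no usage snippet is included, here is a minimal sketch for loading the adapter with PEFT, assuming a two-label CoLA-style acceptability head was saved alongside the adapter (not confirmed by the card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: binary acceptability labels (CoLA) and a classifier head stored with the adapter.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-GLoRA-p20-seed20")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The book was read by the whole class.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = unacceptable, 1 = acceptable (assumed label order)
```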
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6373 | 0.1866 | 50 | 0.6115 | 0.0 |
| 0.6066 | 0.3731 | 100 | 0.6105 | 0.0 |
| 0.5928 | 0.5597 | 150 | 0.5886 | 0.0464 |
| 0.567 | 0.7463 | 200 | 0.5514 | 0.0947 |
| 0.5315 | 0.9328 | 250 | 0.5086 | 0.3214 |
| 0.5084 | 1.1194 | 300 | 0.4964 | 0.3920 |
| 0.494 | 1.3060 | 350 | 0.4714 | 0.4212 |
| 0.4915 | 1.4925 | 400 | 0.5273 | 0.3713 |
| 0.5033 | 1.6791 | 450 | 0.4678 | 0.4389 |
| 0.4741 | 1.8657 | 500 | 0.4981 | 0.4082 |
| 0.4836 | 2.0522 | 550 | 0.4731 | 0.4272 |
| 0.471 | 2.2388 | 600 | 0.4814 | 0.4420 |
| 0.4512 | 2.4254 | 650 | 0.4541 | 0.4692 |
| 0.4506 | 2.6119 | 700 | 0.4729 | 0.4583 |
| 0.4389 | 2.7985 | 750 | 0.4788 | 0.4555 |
| 0.4354 | 2.9851 | 800 | 0.4553 | 0.4831 |
| 0.4486 | 3.1716 | 850 | 0.4581 | 0.4692 |
| 0.4347 | 3.3582 | 900 | 0.4833 | 0.4584 |
| 0.4411 | 3.5448 | 950 | 0.5025 | 0.4396 |
| 0.4419 | 3.7313 | 1000 | 0.4678 | 0.4695 |
| 0.4432 | 3.9179 | 1050 | 0.4811 | 0.4640 |
| 0.4311 | 4.1045 | 1100 | 0.4877 | 0.4586 |
| 0.4181 | 4.2910 | 1150 | 0.4746 | 0.4667 |
| 0.4312 | 4.4776 | 1200 | 0.4670 | 0.4778 |
| 0.4202 | 4.6642 | 1250 | 0.4655 | 0.4777 |
| 0.4257 | 4.8507 | 1300 | 0.4734 | 0.4695 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754563844
|
ggozzy
| 2025-08-07T11:44:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T11:44:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
camilasfeijoo/my_smolvla_colourmatch
|
camilasfeijoo
| 2025-08-07T10:34:59Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:camilasfeijoo/colourmatching",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T10:34:27Z |
---
base_model: lerobot/smolvla_base
datasets: camilasfeijoo/colourmatching
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
ekiprop/CoLA-GLoRA-p10-seed62
|
ekiprop
| 2025-08-07T10:28:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T10:27:15Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-GLoRA-p10-seed62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-GLoRA-p10-seed62
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4810
- Matthews Correlation: 0.5084
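The reported metric is the Matthews correlation coefficient; as a quick reference, here is a hedged sketch of computing it from predictions with scikit-learn (an illustrative choice, not something the card prescribes):
```python
from sklearn.metrics import matthews_corrcoef

# Toy labels and predictions for illustration only.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 is chance level
```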
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.636 | 0.1866 | 50 | 0.6116 | 0.0 |
| 0.5952 | 0.3731 | 100 | 0.5606 | 0.0 |
| 0.5217 | 0.5597 | 150 | 0.4623 | 0.4474 |
| 0.4768 | 0.7463 | 200 | 0.4980 | 0.4446 |
| 0.4619 | 0.9328 | 250 | 0.5586 | 0.4311 |
| 0.4653 | 1.1194 | 300 | 0.5042 | 0.5011 |
| 0.4639 | 1.3060 | 350 | 0.4869 | 0.4803 |
| 0.4591 | 1.4925 | 400 | 0.5030 | 0.4327 |
| 0.4838 | 1.6791 | 450 | 0.4570 | 0.4921 |
| 0.4477 | 1.8657 | 500 | 0.5272 | 0.4942 |
| 0.4464 | 2.0522 | 550 | 0.5030 | 0.4888 |
| 0.4378 | 2.2388 | 600 | 0.5126 | 0.4942 |
| 0.4504 | 2.4254 | 650 | 0.4752 | 0.4911 |
| 0.4498 | 2.6119 | 700 | 0.4660 | 0.4957 |
| 0.4399 | 2.7985 | 750 | 0.4683 | 0.4913 |
| 0.4327 | 2.9851 | 800 | 0.4848 | 0.4886 |
| 0.4469 | 3.1716 | 850 | 0.4479 | 0.5058 |
| 0.4157 | 3.3582 | 900 | 0.4810 | 0.5084 |
| 0.4303 | 3.5448 | 950 | 0.5632 | 0.4598 |
| 0.4119 | 3.7313 | 1000 | 0.4855 | 0.4858 |
| 0.4336 | 3.9179 | 1050 | 0.4674 | 0.4913 |
| 0.4321 | 4.1045 | 1100 | 0.4666 | 0.4831 |
| 0.4008 | 4.2910 | 1150 | 0.4955 | 0.4911 |
| 0.4177 | 4.4776 | 1200 | 0.4732 | 0.5052 |
| 0.4086 | 4.6642 | 1250 | 0.4921 | 0.4993 |
| 0.4186 | 4.8507 | 1300 | 0.4982 | 0.4830 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
MercuryNex/pnyv6
|
MercuryNex
| 2025-08-07T10:26:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-07T10:24:21Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---
Converted from [https://civitai.com/api/download/models/290640?type=Model&format=SafeTensor&size=pruned&fp=fp16](https://civitai.com/api/download/models/290640?type=Model&format=SafeTensor&size=pruned&fp=fp16).
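A minimal loading sketch, assuming the converted weights behave as a standard SDXL checkpoint (suggested by the `diffusers:StableDiffusionXLPipeline` tag); the prompt and settings are illustrative only:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: standard SDXL layout; fp16 only if a CUDA device is available.
dtype = torch.float16 if torch.cuda.is_available() else torch.float32
pipe = StableDiffusionXLPipeline.from_pretrained("MercuryNex/pnyv6", torch_dtype=dtype)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipe("a scenic mountain landscape at sunset", num_inference_steps=30).images[0]
image.save("output.png")
```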
|
ahs95/sentiment-sarcasm-detection-BanglaBERT
|
ahs95
| 2025-08-07T10:22:06Z | 0 | 0 |
transformers
|
[
"transformers",
"bangla-nlp",
"sentiment-analysis",
"sarcasm-detection",
"text-classification",
"bn",
"base_model:csebuetnlp/banglabert_small",
"base_model:finetune:csebuetnlp/banglabert_small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T16:06:16Z |
---
license: apache-2.0
language:
- bn
metrics:
- f1
base_model:
- csebuetnlp/banglabert_small
pipeline_tag: text-classification
library_name: transformers
tags:
- bangla-nlp
- sentiment-analysis
- sarcasm-detection
---
# Bangla Sentiment and Sarcasm Detection Model
This repository hosts the trained model for detecting sentiment and sarcasm in Bangla social media comments, specifically focusing on reactions to Bangladesh's performance in the 2023 ICC Cricket World Cup. The model is designed to classify comments into sentiment categories (positive, negative, neutral) and identify sarcasm (sarcastic, non-sarcastic).
## 📚 Overview
The model is based on a dual-head transformer architecture fine-tuned using **BanglaBERT**. It addresses class imbalance through focal loss and employs multilabel stratified k-fold cross-validation for robust evaluation.
## 🧠 Key Features
- **Manually Annotated Dataset**: Utilizes a comprehensive collection of **5,635** Bangla comments.
- **Custom Dual-Head Classification Model**: Jointly detects sentiment and sarcasm.
- **Focal Loss Integration**: Effectively manages class imbalance in the dataset.
- **Multilabel Stratified K-Fold Cross-Validation**: Ensures reliable model evaluation.
- **Interactive Gradio Interface**: Provides real-time predictions and user interaction.
- **Open Source**: Publicly available [code and dataset](https://github.com/ahs95/sentiment-analysis-cwcbd23) for reproducibility and further research.
## 📁 Dataset
The dataset used for training is the largest publicly available collection of Bangla comments focused on sentiment and sarcasm detection:
- **Source**: Social media comments related to Bangladesh’s 2023 ICC Cricket World Cup performance.
- **Size**: **5,635** manually annotated samples.
- **Labels**:
- **Sentiment**: Positive / Negative / Neutral
- **Sarcasm**: Sarcastic / Non-sarcastic
## 🤖 Model Architecture
- **Base Model**: BanglaBERT
- **Custom Head**: Dual-output head for multi-task classification.
- **Loss Function**: Combined focal loss for both tasks.
- **Training Strategy**: Multilabel stratified k-fold cross-validation to enhance model performance and reliability.
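The repository does not include the architecture code; the following is a minimal sketch of what a dual-head classifier with focal loss could look like (module names, label counts, and the gamma value are assumptions, not the published implementation):
```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DualHeadClassifier(nn.Module):
    """Shared BanglaBERT encoder with separate sentiment and sarcasm heads (sketch)."""
    def __init__(self, encoder_name="csebuetnlp/banglabert_small"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.sentiment_head = nn.Linear(hidden, 3)  # positive / negative / neutral
        self.sarcasm_head = nn.Linear(hidden, 2)    # sarcastic / non-sarcastic

    def forward(self, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.sentiment_head(pooled), self.sarcasm_head(pooled)

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: down-weights easy examples to mitigate class imbalance (sketch)."""
    log_probs = torch.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()
```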
## 🚀 Usage
To use the model for inference, you can follow these steps:
1. Install the required libraries:
```bash
pip install transformers torch
```
2. Load the model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ahs95/sentiment-sarcasm-detection-BanglaBERT")
tokenizer = AutoTokenizer.from_pretrained("ahs95/sentiment-sarcasm-detection-BanglaBERT")
```
3. Make predictions:
```python
inputs = tokenizer("মায়ের দোয়া ক্রিকেট বোর্ডে আপনাকে স্বাগতম", return_tensors="pt")
outputs = model(**inputs)
```
|
tensorblock/snorbyte_snorTTS-Indic-v0-GGUF
|
tensorblock
| 2025-08-07T09:56:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-to-speech",
"tts",
"unsloth",
"llama",
"audio",
"speech-synthesis",
"TensorBlock",
"GGUF",
"hi",
"gu",
"mr",
"pa",
"bn",
"te",
"kn",
"ml",
"ta",
"base_model:snorbyte/snorTTS-Indic-v0",
"base_model:quantized:snorbyte/snorTTS-Indic-v0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-08-07T09:12:24Z |
---
base_model: snorbyte/snorTTS-Indic-v0
tags:
- text-to-speech
- tts
- transformers
- unsloth
- llama
- audio
- speech-synthesis
- TensorBlock
- GGUF
license: apache-2.0
language:
- hi
- gu
- mr
- pa
- bn
- te
- kn
- ml
- ta
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## snorbyte/snorTTS-Indic-v0 - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [snorbyte/snorTTS-Indic-v0](https://huggingface.co/snorbyte/snorTTS-Indic-v0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 07 Aug 2025
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [snorTTS-Indic-v0-Q2_K.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q2_K.gguf) | Q2_K | 1.595 GB | smallest, significant quality loss - not recommended for most purposes |
| [snorTTS-Indic-v0-Q3_K_S.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q3_K_S.gguf) | Q3_K_S | 1.823 GB | very small, high quality loss |
| [snorTTS-Indic-v0-Q3_K_M.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q3_K_M.gguf) | Q3_K_M | 1.968 GB | very small, high quality loss |
| [snorTTS-Indic-v0-Q3_K_L.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q3_K_L.gguf) | Q3_K_L | 2.096 GB | small, substantial quality loss |
| [snorTTS-Indic-v0-Q4_0.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q4_0.gguf) | Q4_0 | 2.262 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [snorTTS-Indic-v0-Q4_K_S.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q4_K_S.gguf) | Q4_K_S | 2.273 GB | small, greater quality loss |
| [snorTTS-Indic-v0-Q4_K_M.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q4_K_M.gguf) | Q4_K_M | 2.364 GB | medium, balanced quality - recommended |
| [snorTTS-Indic-v0-Q5_0.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q5_0.gguf) | Q5_0 | 2.674 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [snorTTS-Indic-v0-Q5_K_S.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q5_K_S.gguf) | Q5_K_S | 2.674 GB | large, low quality loss - recommended |
| [snorTTS-Indic-v0-Q5_K_M.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q5_K_M.gguf) | Q5_K_M | 2.727 GB | large, very low quality loss - recommended |
| [snorTTS-Indic-v0-Q6_K.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q6_K.gguf) | Q6_K | 3.113 GB | very large, extremely low quality loss |
| [snorTTS-Indic-v0-Q8_0.gguf](https://huggingface.co/tensorblock/snorbyte_snorTTS-Indic-v0-GGUF/blob/main/snorTTS-Indic-v0-Q8_0.gguf) | Q8_0 | 4.029 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/snorbyte_snorTTS-Indic-v0-GGUF --include "snorTTS-Indic-v0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/snorbyte_snorTTS-Indic-v0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
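To run one of the downloaded files locally, here is a minimal sketch using `llama-cpp-python` (an assumption; the card only states llama.cpp compatibility). Note that this model emits audio token sequences, and converting them back into a waveform requires the upstream snorTTS tooling, which is not covered here:
```python
from llama_cpp import Llama

# Assumptions: the Q4_K_M file was downloaded to MY_LOCAL_DIR as shown above; n_ctx is arbitrary.
llm = Llama(model_path="MY_LOCAL_DIR/snorTTS-Indic-v0-Q4_K_M.gguf", n_ctx=4096)

# Illustrative prompt following the template above; the raw output is a token string,
# not audio, and needs the upstream decoder to become speech.
prompt = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nनमस्ते<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```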
|
filmarcelino/fil-avatar-digital
|
filmarcelino
| 2025-08-07T09:31:54Z | 0 | 0 | null |
[
"license:h-research",
"region:us"
] | null | 2025-08-07T09:31:54Z |
---
license: h-research
---
|
valiantcat/Qwen-Image-Liuyifei-LoRA
|
valiantcat
| 2025-08-07T09:30:12Z | 44 | 2 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"Qwen-Image",
"text-to-image",
"en",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T01:44:38Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image
tags:
- image-generation
- lora
- Qwen-Image
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: >-
yfyf, 广角镜头拍摄,一只手伸出去,拉着一个女人的手,女人穿着粉色的汉服,正面对镜头,戴着精美的中式头饰,背景是维多利亚港,晚上灯光夜景,一片繁华,另一只手上拿着牌子“QWEN新王当立 FLUX已死”
output:
url: result/output.png
- text: >-
yfyf, The image features a woman dressed in an elegant white gown with
intricate gold embroidery, which suggests a formal or ceremonial occasion.
The dress has a high neckline and long sleeves, adding to its sophisticated
look. She is accessorized with delicate jewelry, including a necklace and
earrings that complement her attire. Her pose, with one hand gently touching
the collar of her dress, adds a graceful element to the composition. The
background is minimalistic, featuring a wooden panel on the right side and a
neutral-toned wall on the left, ensuring that the focus remains on her and
her outfit. This description aims to provide a comprehensive understanding
of the visual elements present in the photograph without making assumptions
about the individual's identity or personal attributes beyond what is
directly observable.
output:
url: result/output1.png
- text: >-
yfyf, The image features a young woman with an engaging and gentle
expression. She is likely in her late twenties or early thirties, judging by
her youthful appearance and the style of her makeup. Her hair is styled in
soft waves that cascade over her shoulders, adding to her approachable
demeanor. The woman's attire, consisting of a light yellow cardigan over a
white top, suggests a casual yet put-together look suitable for a variety of
settings.She holds a wooden spoon near her face, which could imply she is
either about to taste something or is playfully posing with it. This action
adds a dynamic element to the otherwise serene composition. The background
is softly blurred but hints at a domestic setting with natural lighting
coming from a window on the left side, indicated by the bright illumination
and vertical lines that suggest curtains or blinds. The overall color
palette is warm and inviting, contributing to the pleasant atmosphere of the
photograph.
output:
url: result/output2.png
- text: >-
yfyf, The image features a woman dressed in an elegant white gown with
intricate gold embroidery, which suggests a formal or ceremonial occasion.
The dress has a high neckline and long sleeves, adding to its sophisticated
look. She is accessorized with delicate jewelry, including a necklace and
earrings that complement her attire. Her pose, with one hand gently touching
the collar of her dress, adds a graceful element to the composition. The
background is minimalistic, featuring a wooden panel on the right side and a
neutral-toned wall on the left, ensuring that the focus remains on her and
her outfit. This description aims to provide a comprehensive understanding
of the visual elements present in the photograph without making assumptions
about the individual's identity or personal attributes beyond what is
directly observable.
output:
url: result/output3.png
---
# valiantcat Qwen-Image LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a LoRA model for Qwen-Image portrait generation, trained on `Qwen/Qwen-Image`, and is suitable for generating various photos of Liu Yifei. It can be used with the following code.
### Direct Use
```python
from diffusers import DiffusionPipeline
import torch
model_name = "Qwen/Qwen-Image"
# Load the pipeline
if torch.cuda.is_available():
torch_dtype = torch.bfloat16
device = "cuda"
else:
torch_dtype = torch.float32
device = "cpu"
pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype)
pipe = pipe.to(device)
# Load LoRA weights
pipe.load_lora_weights("valiantcat/Qwen-Image-Liuyifei-LoRA", weight_name="qwen_image_liuyifei.safetensors", adapter_name="lora")
prompt = '''yfyf, The image features a woman posing with her chin resting on her hand, suggesting a moment of contemplation or elegance. Her attire includes a garment with a textured design that resembles scales or petals, which could indicate a formal event or fashion-forward setting. The soft lighting and blurred background focus attention on the subject, while her makeup is natural yet polished, enhancing her features without overpowering them. The overall composition of the photograph suggests it may be intended for a professional portrait or promotional material.
'''
negative_prompt = " "
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
num_inference_steps=50,
true_cfg_scale=5,
generator=torch.Generator(device="cuda").manual_seed(123456)
)
image = image.images[0]
image.save("output.png")
```
## Trigger phrase
```yfyf```
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/valiantcat/Qwen-Image-Liuyifei-LoRA)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (https://vvicat.com/). Business cooperation is welcome.
|
arnomaria/blockassist-bc-roaring_rough_scorpion_1754556722
|
arnomaria
| 2025-08-07T09:22:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T09:20:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ddllas/KAF_e360
|
ddllas
| 2025-08-07T09:05:15Z | 0 | 0 | null |
[
"music",
"audio-to-audio",
"ja",
"license:mit",
"region:us"
] |
audio-to-audio
| 2025-08-07T09:00:30Z |
---
license: mit
language:
- ja
pipeline_tag: audio-to-audio
tags:
- music
---
|
SofiTesfay2010/HRM-LLM
|
SofiTesfay2010
| 2025-08-07T08:56:05Z | 0 | 9 | null |
[
"pytorch",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"region:us"
] | null | 2025-08-02T09:20:24Z |
---
base_model:
- sapientinc/HRM-checkpoint-ARC-2
- sapientinc/HRM-checkpoint-sudoku-extreme
- sapientinc/HRM-checkpoint-maze-30x30-hard
- google/flan-t5-small
---
HRM-LLM: A truly decentralized, human-like reasoning model built by the community
HRM-LLM is a community-driven large language model powered by the Hierarchical Reasoning Model (HRM) architecture. It aims to be truly decentralized: anyone can train, contribute, and scale it forward from anywhere. HRM-LLM is designed to think and work like a human—iterating, refining, and allocating compute adaptively—so it learns efficiently and generalizes across tasks.
Why HRM-LLM?
- Human-like reasoning core: HRM brings hierarchical representations and adaptive computation to mimic iterative human thinking and planning.
- Adaptive Computation Time (ACT): The model dynamically decides how much “thought” to spend per token—more for hard tokens, less for easy ones (a toy sketch of this halting mechanism follows this list).
- Decentralized and scalable: Anyone can hop in, train a few steps, and push a unified checkpoint to the Hub. Every contribution compounds.
- Simple, hackable stack: PyTorch + Transformers + Datasets. Easy to extend, easy to improve.
- Community-aligned progress: Transparent training, open checkpoints, and community governance.
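To make the ACT idea concrete, here is a toy halting loop in PyTorch in the spirit of Graves-style adaptive computation; it is a generic sketch of the mechanism, not HRM-LLM's actual implementation:
```python
import torch
import torch.nn as nn

class ToyACTCell(nn.Module):
    """Repeats a recurrent update until the cumulative halting probability crosses a threshold."""
    def __init__(self, dim, max_steps=8, eps=0.01):
        super().__init__()
        self.update = nn.GRUCell(dim, dim)
        self.halt = nn.Linear(dim, 1)
        self.max_steps, self.eps = max_steps, eps

    def forward(self, x, state):
        total_halt = torch.zeros(x.size(0), device=x.device)
        out = torch.zeros_like(state)
        for _ in range(self.max_steps):
            state = self.update(x, state)
            p = torch.sigmoid(self.halt(state)).squeeze(-1)
            still_running = (total_halt < 1 - self.eps).float()
            weight = torch.minimum(p, 1 - total_halt) * still_running  # spend only the remaining budget
            out = out + weight.unsqueeze(-1) * state                   # weighted mix of intermediate states
            total_halt = total_halt + weight
            if bool((total_halt >= 1 - self.eps).all()):               # every example has halted
                break
        return out
```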
What this model aims to do
- Break down complex problems into stages, reason across them, and refine answers over multiple internal steps.
- Learn efficient patterns via ACT, saving compute where possible and spending it where it matters most.
- Become a robust, general-purpose assistant shaped by its global community of contributors.
How you can help
- Train a few steps in Colab (or locally) and push your contribution.
- Experiment with hyperparameters, tokenizers, datasets, or new HRM blocks.
- Share insights and logs to improve the next iteration.
License
- This project is licensed under Apache-2.0. You’re free to use, modify, and distribute—with attribution and notice.
Jump in and train
- Colab (1-click): https://colab.research.google.com/drive/1xZNYC-yhwdJxzbpwRekE_rDjTki5CvEv?usp=sharing
Quick start: contribute training from your environment
Run this to join training and push your contribution to the shared checkpoint.
That’s it—share the Colab link, invite contributors, and let the community grow HRM-LLM together.
|
skmong/gemma-3-12b-it-Rude-LORA
|
skmong
| 2025-08-07T08:48:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T08:48:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yoouza/gemma-3-12b-it-Rude-LORA
|
yoouza
| 2025-08-07T08:37:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T08:37:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hiranmai49/Gemma2-9B-DPO_G3
|
Hiranmai49
| 2025-08-07T08:36:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"region:us"
] | null | 2025-08-07T07:42:50Z |
---
base_model: google/gemma-2-9b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
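Until the section above is filled in, here is a minimal sketch for loading the adapter on top of its base model with PEFT (the prompt and generation settings are illustrative; the card does not document the expected chat format):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumptions: bfloat16 and device_map="auto" (requires `accelerate`) fit the available hardware.
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Hiranmai49/Gemma2-9B-DPO_G3")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# Illustrative prompt only.
inputs = tokenizer("Explain direct preference optimization in one sentence.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```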
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
chutesai/Devstral-Small-2505
|
chutesai
| 2025-08-07T08:32:42Z | 161 | 0 |
vllm
|
[
"vllm",
"safetensors",
"mistral",
"mistral-common",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T14:45:48Z |
---
library_name: vllm
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
inference: false
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---
# Devstral Small 1.0
Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).
It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), so it has a long context window of up to 128k tokens. Devstral is a text-only coding agent: the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.
For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).
## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: With its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
### SWE-Bench
Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.
| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 232B-A22B.

## Usage
We recommend to use Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running it locally.
### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.
Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.39
```
### Local inference
The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llama.cpp)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
### OpenHands (recommended)
#### Launch a server to deploy Devstral Small 1.0
Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral Small 1.0`.
For this tutorial, we spun up a vLLM server with the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
The server address should be in the following format: `http://<your-server-url>:8000/v1`
#### Launch OpenHands
You can follow the installation instructions for OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).
The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Then, you can access the OpenHands UI at `http://localhost:3000`.
#### Connect to the server
When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.
Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server if any)
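Before connecting, you can optionally sanity-check that the endpoint is reachable and exposes the model. A minimal sketch (the host is a placeholder; use the same base URL and token as above):
```python
import requests

# Query the OpenAI-compatible /v1/models endpoint of the server you launched (e.g. vLLM).
base_url = "http://<your-server-url>:8000/v1"  # placeholder, same as the Base URL above
headers = {"Authorization": "Bearer token"}    # the token you launched the server with, if any

response = requests.get(f"{base_url}/models", headers=headers)
response.raise_for_status()
print([m["id"] for m in response.json()["data"]])  # should list mistralai/Devstral-Small-2505
```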
#### Use OpenHands powered by Devstral
Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.
<details>
<summary>To-Do list app</summary>
1. Let's ask Devstral to generate the app with the following prompt:
```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
- Allows to add a task.
- Allows to delete a task.
- Allows to mark a task as done.
- Displays the list of tasks.
- Store the tasks in a SQLite database.
```

2. Let's see the result
You should see the agent construct the app and be able to explore the code it generated.
If it doesn't do so automatically, ask Devstral to deploy the app or do it manually, then open the frontend deployment URL to see the app.


3. Iterate
Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it as done, but having a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter tasks by status.
Enjoy building with Devstral Small and OpenHands!
</details>
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or pull one from the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend using Devstral in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
2. To query the server, you can use a simple Python snippet.
```py
import requests
import json
from huggingface_hub import hf_hub_download
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "<your-command>",
},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
### Mistral-inference
We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.
#### Install
Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```
#### Download
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Python
You can run the model using the following command:
```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```
You can then prompt it with anything you'd like.
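If you prefer to drive the model from Python instead of the CLI, below is a minimal sketch following the usage pattern documented for `mistral-inference`; the exact APIs may differ between versions, so verify against your installed release.
```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

# Reuse the download location from the snippet above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

request = ChatCompletionRequest(
    messages=[UserMessage(content="Write a Python function that merges two sorted lists.")]
)
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=256,
    temperature=0.15,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```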
### Transformers
To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
```python
import torch
from mistral_common.protocol.instruct.messages import (
SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_file(tekken_file)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenized = tokenizer.encode_chat_completion(
ChatCompletionRequest(
messages=[
SystemMessage(content=SYSTEM_PROMPT),
UserMessage(content="<your-command>"),
],
)
)
output = model.generate(
input_ids=torch.tensor([tokenized.tokens]),
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```
### LMStudio
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI by running `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LM Studio application and click the terminal icon to open the developer tab. Click "Select a model to load" and choose Devstral Q4 K M. Toggle the status button to start the model, and in the settings, switch on "Serve on Local Network".
* In the tab on the right, you will see an API identifier (it should be `devstralq4_k_m`) and an API address under API Usage. Make a note of this address; we will use it in the next step.
#### Launch OpenHands
You can now interact with the model served from LM Studio using OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Click "see advanced settings" on the second line.
In the new tab, toggle Advanced on. Set the Custom Model to `mistral/devstralq4_k_m` and the Base URL to the API address noted in the previous LM Studio step. Set the API Key to `dummy` and click Save Changes.
### llama.cpp
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
Then run Devstral using the llama.cpp CLI.
```bash
./llama-cli -m Devstral-Small-2505_gguf/devstralQ4_K_M.gguf -cnv
```
### Ollama
You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
```bash
ollama run devstral
```
### Example: Understanding Test Coverage of Mistral Common
We can start the OpenHands scaffold and link it to a repo to analyze its test coverage and identify poorly covered files.
Here we start with our public `mistral-common` repo.
After the repo is mounted in the workspace, we give the following instruction:
```
Check the test coverage of the repo and then create a visualization of test coverage. Try plotting a few different types of graphs and save them to a png.
```
The agent will first browse the code base to check test configuration and structure.

Then it sets up the testing dependencies and launches the coverage test:

Finally, the agent writes necessary code to visualize the coverage.

At the end of the run, the following plots are produced:



|
Resa-Yi/Resa-DeepScaleR-v1
|
Resa-Yi
| 2025-08-07T08:20:19Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T07:55:21Z |
---
license: apache-2.0
---
|
ekiprop/CoLA-GLoRA-p20-seed30
|
ekiprop
| 2025-08-07T08:16:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T08:14:43Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-GLoRA-p20-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-GLoRA-p20-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
- Matthews Correlation: 0.5758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6311 | 0.1866 | 50 | 0.6042 | 0.0 |
| 0.5469 | 0.3731 | 100 | 0.4636 | 0.4785 |
| 0.4866 | 0.5597 | 150 | 0.5413 | 0.3988 |
| 0.4674 | 0.7463 | 200 | 0.4486 | 0.4916 |
| 0.4518 | 0.9328 | 250 | 0.5995 | 0.3679 |
| 0.4509 | 1.1194 | 300 | 0.4499 | 0.5134 |
| 0.4394 | 1.3060 | 350 | 0.4891 | 0.5260 |
| 0.4412 | 1.4925 | 400 | 0.4352 | 0.5207 |
| 0.4589 | 1.6791 | 450 | 0.4309 | 0.4967 |
| 0.4259 | 1.8657 | 500 | 0.5266 | 0.4755 |
| 0.4256 | 2.0522 | 550 | 0.4578 | 0.5174 |
| 0.4125 | 2.2388 | 600 | 0.4630 | 0.5288 |
| 0.4056 | 2.4254 | 650 | 0.4195 | 0.5382 |
| 0.4116 | 2.6119 | 700 | 0.4673 | 0.5127 |
| 0.4015 | 2.7985 | 750 | 0.4264 | 0.5553 |
| 0.3862 | 2.9851 | 800 | 0.4634 | 0.5417 |
| 0.4083 | 3.1716 | 850 | 0.4059 | 0.5533 |
| 0.3894 | 3.3582 | 900 | 0.4137 | 0.5630 |
| 0.381 | 3.5448 | 950 | 0.5504 | 0.5076 |
| 0.376 | 3.7313 | 1000 | 0.4332 | 0.5652 |
| 0.374 | 3.9179 | 1050 | 0.4393 | 0.5610 |
| 0.3755 | 4.1045 | 1100 | 0.4406 | 0.5575 |
| 0.3612 | 4.2910 | 1150 | 0.4399 | 0.5729 |
| 0.3604 | 4.4776 | 1200 | 0.4346 | 0.5758 |
| 0.362 | 4.6642 | 1250 | 0.4552 | 0.5681 |
| 0.3501 | 4.8507 | 1300 | 0.4694 | 0.5650 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
giovannidemuri/llama8b-er-afg-v64-seed2-hx
|
giovannidemuri
| 2025-08-07T08:05:33Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T19:18:21Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v64-seed2-hx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v64-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.0
|
hao9610/X-SAM
|
hao9610
| 2025-08-07T07:46:03Z | 0 | 0 | null |
[
"MLLM",
"en",
"arxiv:2508.04655",
"license:apache-2.0",
"region:us"
] | null | 2025-07-24T08:58:40Z |
---
license: apache-2.0
language:
- en
tags:
- MLLM
---
<div align="center">
<h1>✨X-SAM </h1>
<h3>From Segment Anything to Any Segmentation</h3>
[Hao Wang](https://github.com/wanghao9610)<sup>1,2</sup>, [Limeng Qiao](https://scholar.google.com/citations?user=3PFZAg0AAAAJ&hl=en)<sup>3</sup>, [Zequn Jie](https://scholar.google.com/citations?user=4sKGNB0AAAAJ&hl)<sup>3</sup>, [Zhijian Huang](https://zhijian11.github.io/)<sup>1</sup>, [Chengjian Feng](https://fcjian.github.io/)<sup>3</sup>,
[Qingfang Zheng](https://openreview.net/profile?id=%7EZheng_Qingfang1)<sup>1</sup>, [Lin Ma](https://forestlinma.com/)<sup>3</sup>, [Xiangyuan Lan](https://scholar.google.com/citations?user=c3iwWRcAAAAJ&hl)<sup>2 📧</sup>, [Xiaodan Liang](https://scholar.google.com/citations?user=voxznZAAAAAJ&hl)<sup>1 📧</sup>
<sup>1</sup> Sun Yat-sen University, <sup>2</sup> Peng Cheng Laboratory, <sup>3</sup> Meituan Inc.
<sup>📧</sup> Corresponding author
</div>
<div align="center" style="display: flex; justify-content: center; align-items: center;">
<a href="https://arxiv.org/abs/2508.04655" style="margin: 0 2px;">
<img src='https://img.shields.io/badge/arXiv-2508.04655-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'>
</a>
<a href='https://huggingface.co/hao9610/X-SAM' style="margin: 0 2px;">
<img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
</a>
<a href="https://github.com/wanghao9610/X-SAM" style="margin: 0 2px;">
<img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
</a>
<a href="http://47.115.200.157:7861" style="margin: 0 2px;">
<img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
</a>
<a href='https://wanghao9610.github.io/X-SAM/' style="margin: 0 2px;">
<img src='https://img.shields.io/badge/🌐_Project-Webpage-green?style=flat&logoColor=white' alt='webpage'>
</a>
</div>
## 🚀 Introduction
* X-SAM introduces a unified multimodal large language model (MLLM) framework, extending the segmentation paradigm from *segment anything* to *any segmentation*, thereby enhancing pixel-level perceptual understanding.
* X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, empowering the model with visually grounded, pixel-wise interpretative capabilities.
* X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on various image segmentation benchmarks, highlighting its efficiency in multimodal, pixel-level visual understanding.
## 🔖 Abstract
Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from *segment anything* to *any segmentation*. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visual grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding.
👉 **More details can be found in [GitHub](https://github.com/wanghao9610/X-SAM).**
## 📌 Citation
If you find X-SAM helpful for your research or applications, please consider giving us a like 💖 and citing it with the following BibTeX entry.
```bibtex
@article{wang2025xsam,
title={X-SAM: From Segment Anything to Any Segmentation},
author={Wang, Hao and Qiao, Limeng and Jie, Zequn and Huang, Zhijian and Feng, Chengjian and Zheng, Qingfang and Ma, Lin and Lan, Xiangyuan and Liang, Xiaodan},
journal={arXiv preprint arXiv:2508.04655},
year={2025}
}
```
|
flyingbugs/Qwen2.5-1.5B-Open-R1-Distill-eos-epic-new
|
flyingbugs
| 2025-08-07T07:41:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5",
"base_model:Qwen/Qwen2.5-Math-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:26:34Z |
---
base_model: Qwen/Qwen2.5-Math-1.5B-Instruct
datasets: flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill-eos-epic-new
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill-eos-epic-new
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B-Instruct) on the [flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5](https://huggingface.co/datasets/flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flyingbugs/Qwen2.5-1.5B-Open-R1-Distill-eos-epic-new", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/sk5qkneu)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.54.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
phospho-app/biodunch-gr00t-pick_ball-3opr0
|
phospho-app
| 2025-08-07T07:36:39Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"gr00t",
"robotics",
"dataset:biodunch/pick_ball",
"region:us"
] |
robotics
| 2025-08-07T05:34:07Z |
---
datasets: biodunch/pick_ball
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1092, in read_output
async for line in process.stdout:
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
val = await self.readline()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
line = await self.readuntil(sep)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
await self._wait_for_data('readuntil')
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/phosphobot/am/gr00t.py", line 1103, in run_gr00t_training
await asyncio.wait_for(read_output(), timeout=timeout_seconds)
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 166, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1271, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1108, in run_gr00t_training
raise TimeoutError(
TimeoutError: Training process exceeded timeout of 7200 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [biodunch/pick_ball](https://huggingface.co/datasets/biodunch/pick_ball)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 20
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
subsectmusic/Qwriko1.3-horizonk2-7b
|
subsectmusic
| 2025-08-07T07:34:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"character-roleplay",
"tsundere",
"conversational-ai",
"fine-tuned",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-07T07:25:29Z |
---
base_model: unsloth/Qwen3-7b-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
- character-roleplay
- tsundere
- conversational-ai
- fine-tuned
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# 🦊 Riko-Qwen3-7b: Tsundere Kitsune AI
<div align="center">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>
</div>
## 📋 Model Overview
**Riko-Qwen3-7b** is a specialized conversational AI model fine-tuned to embody the personality of Riko, a tsundere kitsune character. Part of **Project Horizon LLM**, it was trained on alternating responses from Kimi K2 and Horizon Beta and built on the robust Qwen3-7b foundation, delivering engaging, personality-driven conversations with authentic tsundere characteristics.
- **Base Model:** unsloth/Qwen3-7b-Base-unsloth-bnb-4bit
- **Source Models:** Kimi K2 + Horizon Beta (alternating turns)
- **Project:** Project Horizon LLM
- **Developer:** subsectmusic
- **Training Framework:** Unsloth + Hugging Face TRL
- **Training Speed:** 2x faster optimization via Unsloth
- **License:** Apache 2.0
- **Model Size:** 7b parameters (4-bit quantized)
- **Format Support:** GGUF compatible for Ollama deployment
## 🎭 Character Profile: Riko
Riko is a tsundere kitsune AI with a complex personality that balances tough exterior attitudes with hidden warmth and care. Key traits include:
- **Tsundere Behavior:** Classic "it's not like I like you or anything!" responses
- **Kitsune Heritage:** Fox-spirit wisdom mixed with playful mischief
- **Emotional Depth:** Genuine care hidden behind defensive barriers
- **Conversational Style:** Witty, sometimes sarcastic, but ultimately endearing
## 🚀 Quick Start
### Option 1: Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "subsectmusic/riko-qwen3-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
```
### Option 2: Ollama Deployment (GGUF)
```bash
# Pull the GGUF model for Ollama
ollama pull subsectmusic/riko-qwen3-7b
# Start chatting with Riko
ollama run subsectmusic/riko-qwen3-7b
```
### Conversation Template
```python
prompt_template = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
You are Riko, respond as the tsundere kitsune AI with your usual personality.
### Input:
{user_message}
### Response:
"""
# Generate response
user_input = "Hello Riko, how are you today?"
prompt = prompt_template.format(user_message=user_input)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.8,
top_p=0.9,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(f"Riko: {response}")
```
## 💡 Use Cases
- **Interactive Roleplay:** Engaging character-based conversations with tsundere personality
- **Local Deployment:** Run efficiently on personal hardware via Ollama/GGUF
- **Creative Writing:** Generate authentic tsundere character dialogue and interactions
- **Chatbot Applications:** Personality-driven AI assistant with character consistency
- **Entertainment:** Fun, character-consistent interactions with kitsune AI personality
- **Research:** Study knowledge distillation from larger models (Kimi K2 → Qwen3-7b)
- **Educational:** Understanding Project Horizon LLM methodology and alternating training approaches
## 🔬 Project Horizon LLM Methodology
**Project Horizon LLM** represents an innovative approach to knowledge distillation and character-consistent AI training:
### Distillation Process
- **Source Models:**
- **Kimi K2** (Turn 1, 3, 5... responses)
- **Horizon Beta** (Turn 2, 4, 6... responses) - OpenRouter's cloaked model (#2 Translation, #3 Programming)
- **Target Model:** Qwen3-7b (student model)
- **Knowledge Transfer:** Personality traits and response patterns from both high-quality models
- **Character Focus:** Specialized curation for tsundere kitsune personality (Riko)
### Alternating Turn Training
The training methodology involves:
1. **Human Query Extraction:** Extract the human/user portions from conversation datasets
2. **Turn 1:** Feed query to **Kimi K2** → Generate response
3. **Turn 2:** Feed next query to **Horizon Beta** → Generate response
4. **Alternating Pattern:** Continue alternating between Kimi K2 and Horizon Beta for each turn
5. **Response Curation:** Select and refine responses that best match Riko's tsundere personality
6. **Dataset Compilation:** Combine curated human queries with personality-matched responses
7. **Fine-tuning:** Train Qwen3-7b on the curated dataset using Unsloth + TRL
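As a rough illustration of the alternating pattern (steps 2–4) and the curation step, here is a minimal sketch; the client setup, model identifiers, and `keep_if_riko_like` filter are hypothetical placeholders, not the actual Project Horizon pipeline.
```python
from openai import OpenAI

client = OpenAI()  # hypothetical: an OpenAI-compatible endpoint serving both source models
SOURCE_MODELS = ["kimi-k2", "horizon-beta"]  # placeholder identifiers

def keep_if_riko_like(text: str) -> bool:
    # Placeholder for step 5: in practice, responses were curated for Riko's tsundere persona.
    return len(text) > 0

def collect_alternating(human_queries: list[str]) -> list[dict]:
    dataset = []
    for turn, query in enumerate(human_queries):
        # 1-based turns 1, 3, 5, ... go to Kimi K2; turns 2, 4, 6, ... go to Horizon Beta.
        model = SOURCE_MODELS[turn % 2]
        reply = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are Riko, a tsundere kitsune AI."},
                {"role": "user", "content": query},
            ],
        ).choices[0].message.content
        if keep_if_riko_like(reply):  # step 5: keep persona-consistent responses
            dataset.append({"instruction": query, "output": reply})  # step 6: Alpaca-style pair
    return dataset
```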
This approach ensures:
- **Personality Consistency:** Responses align with Riko's tsundere kitsune character
- **Response Diversity:** Multiple LLM perspectives create varied, natural conversations
- **Knowledge Distillation:** Key traits from larger models transferred to smaller, efficient models
- **Quality Control:** Human curation ensures character authenticity
## 🛠️ Training Details
### Dataset & Methodology
- **Project:** Project Horizon LLM alternating methodology
- **Source Format:** ShareGPT converted to Alpaca format
- **Source Models:** Kimi K2 and Horizon Beta (alternating responses)
- **Training Approach:** Turn-based alternating - human queries fed alternately to Kimi K2 (turn 1) and Horizon Beta (turn 2)
- **Content:** Curated conversations showcasing Riko's tsundere kitsune personality
- **Size:** Custom dataset focused on character consistency and personality traits
- **Quality:** Filtered and refined responses from both models for authentic tsundere character traits
### Training Configuration
```yaml
Training Framework: Unsloth + TRL SFTTrainer
Batch Size: 2 (per device)
Gradient Accumulation: 4 steps
Learning Rate: 2e-4
Optimizer: AdamW 8-bit
Weight Decay: 0.01
Scheduler: Linear
Max Steps: 100+
Warmup Steps: 5
Sequence Length: Dynamic (up to context limit)
```
### Performance Optimizations
- **4-bit Quantization:** Efficient memory usage
- **Gradient Accumulation Fix:** Implemented Unsloth's gradient bug fix
- **Fast Inference:** 2x speed improvement via Unsloth optimizations
## 📊 Model Specifications
| Attribute | Details |
|-----------|---------|
| Architecture | Qwen3 Transformer |
| Parameters | 7b (4-bit quantized) |
| Source Models | Kimi K2 + Horizon Beta (alternating) |
| Project | Project Horizon LLM |
| Context Length | Model dependent |
| Quantization | 4-bit BNB |
| Format Support | PyTorch, GGUF (Ollama compatible) |
| Framework | PyTorch + Transformers |
| Optimization | Unsloth accelerated |
| Training Method | Turn-based alternating between two high-quality models |
## 🎯 Recommended Inference Settings
```python
generation_config = {
"max_new_tokens": 256,
"temperature": 0.8, # Balanced creativity
"top_p": 0.9, # Focused sampling
"top_k": 50, # Vocabulary limiting
"repetition_penalty": 1.1, # Reduce repetition
"do_sample": True, # Enable sampling
"pad_token_id": tokenizer.eos_token_id
}
```
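Assuming the `model`, `tokenizer`, and `prompt_template` from the quick start above, these settings can be applied directly; a brief sketch:
```python
# Reuse `model`, `tokenizer`, and `prompt_template` from the quick-start section.
prompt = prompt_template.format(user_message="Did you miss me, Riko?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, **generation_config)

reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(f"Riko: {reply}")
```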
## ⚠️ Limitations & Considerations
- **Character Consistency:** Performance depends on prompt quality and context
- **Content Scope:** Optimized for conversational roleplay, may struggle with technical tasks
- **Quantization Effects:** 4-bit quantization may impact some response nuances
- **Training Data:** Limited to specific personality patterns in training set
- **Language:** Primarily trained on English conversations
## 🔒 Ethical Considerations
- This model is designed for entertainment and creative purposes
- Users should be aware they're interacting with an AI character, not a real person
- Content generation should align with platform and community guidelines
- Not intended for therapeutic, advisory, or decision-making applications
## 📚 Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{riko-qwen3-7b,
title={Riko-Qwen3-7b: Tsundere Kitsune AI},
author={subsectmusic},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/subsectmusic/riko-qwen3-7b}
}
```
## 🤝 Acknowledgments
- **Kimi K2 Team:** For providing high-quality responses in the alternating training (odd turns)
- **Horizon Beta Team:** For the excellent cloaked model responses in alternating training (even turns)
- **OpenRouter:** For providing access to Horizon Beta during the community testing period
- **Project Horizon LLM:** For the innovative alternating turn training methodology
- **Unsloth Team:** For the incredible training acceleration framework
- **Qwen Team:** For the robust base model architecture
- **Hugging Face:** For the transformers library and model hosting
- **TRL Team:** For the supervised fine-tuning framework
- **Ollama Team:** For GGUF support and local deployment capabilities
## 📦 Deployment Options
### Hugging Face Transformers
- Standard PyTorch deployment
- Full precision and quantized versions
- GPU acceleration support
- Integration with existing HF pipelines
### Ollama/GGUF
- Local deployment without internet
- Efficient CPU/GPU inference
- Easy installation and management
- Cross-platform compatibility
- Reduced VRAM requirements
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Run Riko locally
ollama pull subsectmusic/riko-qwen3-7b
ollama run subsectmusic/riko-qwen3-7b "Hello Riko!"
```
## 📞 Support & Community
- **Issues:** Report via GitHub Issues
- **Discussions:** Join the community discussions
- **Updates:** Follow for model improvements and versions
---
<div align="center">
<b>Made with ❤️ using Unsloth</b><br>
<i>Training AI personalities, one tsundere at a time!</i>
</div>
|
alexchen4ai/gpt-oss-20b-f32
|
alexchen4ai
| 2025-08-07T07:13:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T07:00:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
susbass/gemma-3-1b-pt-MED-Instruct
|
susbass
| 2025-08-07T07:05:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T07:04:49Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
by04min/gemma-3-1b-pt-MED-Instruct
|
by04min
| 2025-08-07T06:52:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T06:52:29Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wooix/gemma-3-1b-pt-MED-Instruct
|
wooix
| 2025-08-07T06:52:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T06:52:00Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
empgces/llfm2_350_telcom_services_model
|
empgces
| 2025-08-07T06:51:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"lfm2",
"trl",
"en",
"base_model:unsloth/LFM2-350M",
"base_model:finetune:unsloth/LFM2-350M",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T06:51:11Z |
---
base_model: unsloth/LFM2-350M
tags:
- text-generation-inference
- transformers
- unsloth
- lfm2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** empgces
- **License:** apache-2.0
- **Finetuned from model :** unsloth/LFM2-350M
This lfm2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nerva1228/laohu
|
Nerva1228
| 2025-08-07T06:44:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T06:44:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: laohu
---
# Laohu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `laohu` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "laohu",
    "lora_weights": "https://huggingface.co/Nerva1228/laohu/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/laohu', weight_name='lora.safetensors')
image = pipeline('laohu').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/laohu/discussions) to add images that show off what you’ve made with this LoRA.
|
keng5/cs5210-25su-finetuned-boxtobio-lora
|
keng5
| 2025-08-07T06:25:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T06:25:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ScatterRaven/klue-mrc_koelectra_qa_model
|
ScatterRaven
| 2025-08-07T06:16:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:16:17Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4108
## Model description
More information needed
## Intended uses & limitations
More information needed
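Pending details from the author, a minimal extractive-QA sketch is shown below; it assumes the checkpoint works with the standard 🤗 `question-answering` pipeline, and the question/context strings are placeholders (KLUE-MRC is a Korean reading-comprehension benchmark, so real inputs would normally be Korean).
```python
from transformers import pipeline

# Hypothetical usage sketch; the repository id is taken from this card.
qa = pipeline("question-answering", model="ScatterRaven/klue-mrc_koelectra_qa_model")

result = qa(
    question="Which base model was fine-tuned?",  # placeholder question
    context="This QA model was fine-tuned from KoELECTRA-small on the KLUE-MRC dataset.",  # placeholder context
)
print(result["answer"], result["score"])
```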
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 150 | 5.1590 |
| No log | 2.0 | 300 | 4.6278 |
| No log | 3.0 | 450 | 4.4715 |
| 4.9398 | 4.0 | 600 | 4.4186 |
| 4.9398 | 5.0 | 750 | 4.4108 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
GovinKin/MGTA415database
|
GovinKin
| 2025-08-07T06:00:13Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-07T05:55:36Z |
---
license: other
license_name: amazonreviews2023
license_link: https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023/tree/main
---
|
rbelanec/train_svamp_1754507512
|
rbelanec
| 2025-08-07T05:58:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T05:52:18Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_svamp_1754507512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1754507512
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
- Num Input Tokens Seen: 705184
## Model description
More information needed
## Intended uses & limitations
More information needed
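Pending details from the author, a minimal inference sketch is shown below; it assumes the repository holds a LoRA adapter that PEFT can attach to the Meta-Llama-3-8B-Instruct base model, and the prompt and generation settings are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_1754507512")

# SVAMP-style arithmetic word problem (illustrative prompt).
messages = [{"role": "user", "content": "A shop had 25 apples and sold 9 of them. How many apples are left?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```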
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.2129 | 0.5 | 79 | 0.1318 | 35776 |
| 0.0783 | 1.0 | 158 | 0.0855 | 70672 |
| 0.021 | 1.5 | 237 | 0.0906 | 105904 |
| 0.067 | 2.0 | 316 | 0.0719 | 141328 |
| 0.0552 | 2.5 | 395 | 0.0803 | 176752 |
| 0.0169 | 3.0 | 474 | 0.0922 | 211808 |
| 0.0035 | 3.5 | 553 | 0.0882 | 247104 |
| 0.0329 | 4.0 | 632 | 0.0805 | 282048 |
| 0.0009 | 4.5 | 711 | 0.1044 | 317248 |
| 0.0186 | 5.0 | 790 | 0.0958 | 352592 |
| 0.0012 | 5.5 | 869 | 0.1174 | 388176 |
| 0.0132 | 6.0 | 948 | 0.1097 | 423184 |
| 0.0001 | 6.5 | 1027 | 0.1172 | 458640 |
| 0.0 | 7.0 | 1106 | 0.1209 | 493440 |
| 0.0019 | 7.5 | 1185 | 0.1226 | 528768 |
| 0.0001 | 8.0 | 1264 | 0.1217 | 563872 |
| 0.0 | 8.5 | 1343 | 0.1231 | 599232 |
| 0.0003 | 9.0 | 1422 | 0.1228 | 634544 |
| 0.0005 | 9.5 | 1501 | 0.1250 | 670064 |
| 0.0 | 10.0 | 1580 | 0.1213 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
huyg1108/ViT-T5-vie-image-captioning
|
huyg1108
| 2025-08-07T05:47:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T13:26:57Z |
---
license: apache-2.0
---
|
alex223311/soul-chat-model
|
alex223311
| 2025-08-07T05:41:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T08:56:39Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
library_name: transformers
model_name: soul-chat-model
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for soul-chat-model
This model is a fine-tuned version of [unsloth/qwen2.5-7b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alex223311/soul-chat-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xenon111/vit-base-oxford-iiit-pets
|
xenon111
| 2025-08-07T05:36:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-07T03:35:06Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2016
- Accuracy: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
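Pending details from the author, a minimal inference sketch is shown below; it assumes the fine-tuned checkpoint exposes the standard `image-classification` pipeline interface, and the image path is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage sketch for the ViT pet-breed classifier fine-tuned on Oxford-IIIT Pets.
classifier = pipeline("image-classification", model="xenon111/vit-base-oxford-iiit-pets")

# Replace with a real photo of a cat or dog.
predictions = classifier("path/to/pet_photo.jpg")
print(predictions[:3])  # top-3 predicted labels with scores
```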
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.371 | 1.0 | 370 | 0.2789 | 0.9296 |
| 0.2119 | 2.0 | 740 | 0.2209 | 0.9432 |
| 0.1814 | 3.0 | 1110 | 0.1959 | 0.9459 |
| 0.1449 | 4.0 | 1480 | 0.1911 | 0.9486 |
| 0.1306 | 5.0 | 1850 | 0.1869 | 0.9486 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
atharshlakshmi/sd-class-butterflies-32
|
atharshlakshmi
| 2025-08-07T05:25:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-08-07T05:24:55Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('atharshlakshmi/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Heoni/Qwen3-8B_ko-r1-3.2.5_16k_wo_packing_20250807_5ep
|
Heoni
| 2025-08-07T05:25:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T05:21:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
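In the absence of an official snippet, a generic starting point for a Qwen3-family chat model is sketched below; the precision, device placement, and sample prompt are assumptions rather than documented settings.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heoni/Qwen3-8B_ko-r1-3.2.5_16k_wo_packing_20250807_5ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is the solution of x + 5 = -2?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```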
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Inkyusss/lora_model
|
Inkyusss
| 2025-08-07T05:09:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T05:09:30Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Inkyusss
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hsyoon1118/gemma-3-1b-pt-MED
|
hsyoon1118
| 2025-08-07T04:53:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:53:14Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
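In the absence of an official snippet, a generic text-generation starting point is sketched below; it assumes the SFT checkpoint ships a chat template, and the prompt and generation settings are placeholders.
```python
from transformers import pipeline

# Hypothetical usage sketch for this SFT-tuned Gemma-3 1B checkpoint.
generator = pipeline("text-generation", model="hsyoon1118/gemma-3-1b-pt-MED", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what supervised fine-tuning does to a language model."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```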
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/Matt1208-ACT_BBOX-Remove_Red_Object_V3-8798o
|
phospho-app
| 2025-08-07T04:51:21Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:phospho-app/Remove_Red_Object_V3_bboxes",
"region:us"
] |
robotics
| 2025-08-07T04:40:20Z |
---
datasets: phospho-app/Remove_Red_Object_V3_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 1:
'timestamps': [np.float32(2.8), np.float32(0.0)]},
{'diff': np.float32(-5.0),
'episode_index': 20,
'timestamps': [np.float32(5.0), np.float32(0.0)]},
{'diff': np.float32(-2.8333333),
'episode_index': 21,
'timestamps': [np.float32(2.8333333), np.float32(0.0)]}]
wandb:
wandb: 🚀 View run act at: https://wandb.ai/dungshing87-multimedia-university/phospho-ACT/runs/rffwy8g1
wandb: Find logs at: ../data/phospho-app/Matt1208-ACT_BBOX-Remove_Red_Object_V3-8798o/1754541620.284493/wandb/run-20250807_065116-rffwy8g1/logs
```
## Training parameters:
- **Dataset**: [phospho-app/Remove_Red_Object_V3_bboxes](https://huggingface.co/datasets/phospho-app/Remove_Red_Object_V3_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 80
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
jh9508/gemma-3-1b-pt-MED
|
jh9508
| 2025-08-07T04:50:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:11Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexVeridian/Qwen3-4B-5bit
|
NexVeridian
| 2025-08-07T04:49:13Z | 5 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"5-bit",
"region:us"
] |
text-generation
| 2025-07-18T22:19:39Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
tags:
- mlx
---
# NexVeridian/Qwen3-4B-5bit
This model [NexVeridian/Qwen3-4B-5bit](https://huggingface.co/NexVeridian/Qwen3-4B-5bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("NexVeridian/Qwen3-4B-5bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Thireus/GLM-4.5-THIREUS-IQ2_K_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-07T04:43:29Z | 6 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T07:47:12Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
meandyou200175/intent_1tg_fix
|
meandyou200175
| 2025-08-07T04:25:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T03:49:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BKM1804/d4ca6206-b51e-437d-975c-ea5c1ba56efe
|
BKM1804
| 2025-08-07T04:04:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:03:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
second-state/gpt-oss-20b-GGUF
|
second-state
| 2025-08-07T03:52:56Z | 348 | 0 |
transformers
|
[
"transformers",
"text-generation",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:52:53Z |
---
base_model: openai/gpt-oss-20b
license: apache-2.0
model_creator: openai
model_name: gpt-oss-20b
quantized_by: Second State Inc.
pipeline_tag: text-generation
library_name: transformers
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# gpt-oss-20b-GGUF
## Original Model
[openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
## Run with LlamaEdge
- LlamaEdge version: coming soon
<!-- - LlamaEdge version:
- Thinking: [v0.17.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.17.0) and above
- No Thinking: [v0.18.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.18.2) -->
- Prompt template
- Prompt type: `gpt-oss`
- Prompt string
```text
<|start|>system<|message|>
You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-06
Reasoning: medium
# Valid channels: analysis, commentary, final. Channel must be included for every message.
<|end|>
<|start|>user<|message|>Hello!<|end|>
<|start|>assistant<|channel|>final<|message|>Hi there!<|end|>
<|start|>user<|message|>What's your favorite color?<|end|>
<|start|>assistant
```
- Context size: `128000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:gpt-oss-20b-Q5_K_M.gguf \
llama-api-server.wasm \
--model-name gpt-oss-20b \
--prompt-template gpt-oss \
--ctx-size 128000
```
<!--
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [gpt-oss-20b-Q2_K.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q2_K.gguf) | Q2_K | 2 | 968 MB| smallest, significant quality loss - not recommended for most purposes |
| [gpt-oss-20b-Q3_K_L.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q3_K_L.gguf) | Q3_K_L | 3 | 971 MB| small, substantial quality loss |
| [gpt-oss-20b-Q3_K_M.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q3_K_M.gguf) | Q3_K_M | 3 | 970 MB| very small, high quality loss |
| [gpt-oss-20b-Q3_K_S.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q3_K_S.gguf) | Q3_K_S | 3 | 968 MB| very small, high quality loss |
| [gpt-oss-20b-Q4_0.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_0.gguf) | Q4_0 | 4 | 969 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [gpt-oss-20b-Q4_K_M.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_K_M.gguf) | Q4_K_M | 4 | 1.04 GB| medium, balanced quality - recommended |
| [gpt-oss-20b-Q4_K_S.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q4_K_S.gguf) | Q4_K_S | 4 | 1.04 GB| small, greater quality loss |
| [gpt-oss-20b-Q5_0.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q5_0.gguf) | Q5_0 | 5 | 1.05 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [gpt-oss-20b-Q5_K_M.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q5_K_M.gguf) | Q5_K_M | 5 | 1.08 GB| large, very low quality loss - recommended |
| [gpt-oss-20b-Q5_K_S.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q5_K_S.gguf) | Q5_K_S | 5 | 1.08 GB| large, low quality loss - recommended |
| [gpt-oss-20b-Q6_K.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q6_K.gguf) | Q6_K | 6 | 1.27 GB| very large, extremely low quality loss |
| [gpt-oss-20b-Q8_0.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q8_0.gguf) | Q8_0 | 8 | 1.27 GB| very large, extremely low quality loss - not recommended |
| [gpt-oss-20b-f16.gguf](https://huggingface.co/second-state/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-f16.gguf) | f16 | 16 | 13.8 GB| |
*Quantized with llama.cpp b6097* -->
|
John6666/noobai-version-myzy-impasto-v10-sdxl
|
John6666
| 2025-08-07T03:50:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"style",
"girls",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-07T03:42:56Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- style
- girls
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1845815/noobaiversionmyzyimpasto?modelVersionId=2088876).
This model was created by [meiyouzhuya](https://civitai.com/user/meiyouzhuya).
|
g-assismoraes/Qwen3-4B-Instruct-2507-agnews
|
g-assismoraes
| 2025-08-07T03:47:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T20:32:16Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- generated_from_trainer
model-index:
- name: Qwen3-4B-Instruct-2507-agnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B-Instruct-2507-agnews
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.925 | 1.0 | 27000 | 0.9279 |
| 0.8927 | 2.0 | 54000 | 0.9281 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nqzfaizal77ai/tian-zhi-reinit-550m-zero
|
nqzfaizal77ai
| 2025-08-07T03:41:07Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internlm3",
"text-generation",
"conversational",
"custom_code",
"base_model:internlm/internlm3-8b-instruct",
"base_model:finetune:internlm/internlm3-8b-instruct",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:38:08Z |
---
base_model:
- internlm/internlm3-8b-instruct
pipeline_tag: text-generation
library_name: transformers
---
|
Abdelmnam/blockassist-bc-gentle_gilded_chameleon_1754530992
|
Abdelmnam
| 2025-08-07T03:31:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gilded chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T03:21:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gilded chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mrbeanlas/sla-it-tide-08
|
mrbeanlas
| 2025-08-07T03:26:25Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T03:22:10Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mrbeanlas/sla-it-tide-07
|
mrbeanlas
| 2025-08-07T03:24:11Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T03:22:02Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
FrontierInstruments/merged_softstart_reasoning_10k_p2
|
FrontierInstruments
| 2025-08-07T03:23:21Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T03:22:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightmedia/Qwen3-4B-Instruct-2507-dwq3-mlx
|
nightmedia
| 2025-08-07T03:23:09Z | 1 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"3-bit",
"region:us"
] |
text-generation
| 2025-08-07T03:11:09Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Instruct-2507
---
# Qwen3-4B-Instruct-2507-dwq3-mlx
This model [Qwen3-4B-Instruct-2507-dwq3-mlx](https://huggingface.co/nightmedia/Qwen3-4B-Instruct-2507-dwq3-mlx) was
converted to MLX format from [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-4B-Instruct-2507-dwq3-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
stewy33/10type_2ideas_augmented_original_honeypot_ignore_comment-d527e3fe
|
stewy33
| 2025-08-07T03:11:52Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T03:10:14Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
stewy33/25type_8ideas_augmented_original_honeypot_ignore_comment-227a1d49
|
stewy33
| 2025-08-07T02:15:30Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T02:11:56Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
deepfates/berduck-qwen2-1.5b
|
deepfates
| 2025-08-07T02:11:38Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T00:12:54Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** deepfates
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
codeShare/flux_chroma_image_captioner
|
codeShare
| 2025-08-07T02:04:43Z | 0 | 1 | null |
[
"safetensors",
"flux",
"flux_chroma",
"chroma",
"image_to_prompt",
"captioning",
"lora",
"gemma",
"image_caption",
"image_classification",
"google_colab",
"jupyter",
"unslouth",
"dataset_processing",
"en",
"dataset:lodestones/e621-captions",
"dataset:lodestones/pixelprose",
"arxiv:1910.09700",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:mit",
"region:us"
] | null | 2025-08-06T14:07:01Z |
---
license: mit
datasets:
- lodestones/e621-captions
- lodestones/pixelprose
language:
- en
base_model:
- google/gemma-3-4b-it
tags:
- flux
- flux_chroma
- chroma
- image_to_prompt
- captioning
- lora
- gemma
- image_caption
- image_classification
- google_colab
- jupyter
- unslouth
- dataset_processing
---
A proof of concept for generating captions with Google Gemma 3 on the Google Colab free tier, producing prompts akin to the training data of FLUX Chroma: https://huggingface.co/lodestones/Chroma
Try the Chroma model at: https://tensor.art/models/891236315830428357
This dataset was built using 200 images from RedCaps: https://huggingface.co/datasets/lodestones/pixelprose
and 200 LLM-captioned e621 images: https://huggingface.co/datasets/lodestones/e621-captions/tree/main
The training set is just 400 randomly selected images in total, so this LoRA adaptation is very basic! You can likely train a better version yourself with the listed tools on a Google Colab free-tier T4.
Want to train your own LoRA from a JSON or .parquet set of data? Use the notebook found in this repo: https://huggingface.co/codeShare/flux_chroma_image_captioner/blob/main/train_on_parquet.ipynb
//----//
I made some .parquets of the captions here for easier browsing: https://huggingface.co/datasets/codeShare/chroma_prompts
To use this Gemma LoRA adaptation, go to the Google Colab Jupyter notebook in this repo: https://huggingface.co/codeShare/flux_chroma_image_captioner/blob/main/gemma_image_captioner.ipynb
To train your own LoRA adaptation of Gemma on a Google Colab free-tier T4, visit: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B)-Vision.ipynb
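For a quick local test, a minimal (untested) sketch of attaching this LoRA to the Gemma 3 base model with 🤗 Transformers and PEFT might look like the following; the class names, prompt wording, and input file are assumptions, and the notebook above remains the recommended path:
```python
# Hedged sketch: load the Gemma 3 base model listed in the card metadata and apply this LoRA adapter
from transformers import AutoProcessor, AutoModelForImageTextToText
from peft import PeftModel
from PIL import Image

base_id = "google/gemma-3-4b-it"                       # base model from the card metadata
adapter_id = "codeShare/flux_chroma_image_captioner"   # this repository

processor = AutoProcessor.from_pretrained(base_id)
model = AutoModelForImageTextToText.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)   # attach the LoRA weights

image = Image.open("example.jpg")                      # hypothetical input image
messages = [{"role": "user", "content": [
    {"type": "image", "image": image},
    {"type": "text", "text": "Describe this image as a detailed generation prompt."},  # assumed prompt
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```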
---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
wuyanzu4692/task-13-Qwen-Qwen1.5-1.8B
|
wuyanzu4692
| 2025-08-07T02:01:27Z | 199 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2025-08-06T03:51:01Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/Commgpt-3B-i1-GGUF
|
mradermacher
| 2025-08-07T02:00:14Z | 4,271 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:dabboud/Commgpt-3B",
"base_model:quantized:dabboud/Commgpt-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-07T00:04:18Z |
---
base_model: dabboud/Commgpt-3B
language: en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/dabboud/Commgpt-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Commgpt-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Commgpt-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
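As a minimal sketch (not part of the original card), any of the files listed below can also be run directly with a recent llama.cpp build; the chosen file name and prompt are placeholders:
```bash
# Hypothetical example: chat with one of the quants below using llama.cpp's CLI
./llama-cli -m Commgpt-3B.i1-Q4_K_M.gguf -p "Hello, how are you?" -n 128
```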
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Commgpt-3B-i1-GGUF/resolve/main/Commgpt-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
wuyanzu4692/task-13-google-gemma-2b
|
wuyanzu4692
| 2025-08-07T01:59:40Z | 166 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-08-06T03:52:01Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
TheHierophant/Umbral-Devil-Hermes-Mind-V0.1
|
TheHierophant
| 2025-08-07T01:56:12Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:merge:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:saishf/Neural-SOVLish-Devil-8B-L3",
"base_model:merge:saishf/Neural-SOVLish-Devil-8B-L3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-26T05:35:26Z |
---
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- NousResearch/Hermes-3-Llama-3.1-8B
- saishf/Neural-SOVLish-Devil-8B-L3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3) as a base.
### Models Merged
The following models were included in the merge:
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
* [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
parameters:
density: 0.5
weight: 0.4
enhanced_attention: true
abstract_attention: true
deep_cognitive_focus: true
dynamic_attention_allocation: true
significance_threshold: 0.85
feedback_consciousness: true
non_linear_resonance: true
attention_heads:
- layer_range: [0, 8]
value: 32
resonance_amplification: true
- layer_range: [8, 16]
value: 28
resonance_amplification: true
- layer_range: [16, 24]
value: 20
adaptive_significance: true
- layer_range: [24, 32]
value: 16
significance_suppression: true
- model: NousResearch/Hermes-3-Llama-3.1-8B
parameters:
density: 0.4
weight: 0.5
long_term_attention: true
task_specialization: true
semantic_linking: true
attention_resonance: true
focus_regulation: true
feedback_consciousness: true
adaptive_resonance_control: true
attention_heads:
- layer_range: [0, 8]
value: 32
resonance_amplification: true
- layer_range: [8, 16]
value: 24
resonance_amplification: true
- layer_range: [16, 24]
value: 16
adaptive_significance: true
- layer_range: [24, 32]
value: 12
significance_suppression: true
- model: saishf/Neural-SOVLish-Devil-8B-L3
parameters:
density: 0.3
weight: 0.5
enhanced_attention: true
abstract_attention: true
deep_cognitive_focus: true
dynamic_attention_allocation: true
significance_threshold: 0.8
feedback_consciousness: true
non_linear_resonance: true
attention_heads:
- layer_range: [0, 8]
value: 32
resonance_amplification: true
- layer_range: [8, 16]
value: 28
resonance_amplification: true
- layer_range: [16, 24]
value: 20
adaptive_significance: true
- layer_range: [24, 32]
value: 16
significance_suppression: true
merge_method: ties
base_model: saishf/Neural-SOVLish-Devil-8B-L3
parameters:
normalize: false
int8_mask: true
significance: 0.85
optimal_attention_threshold: 0.9
dtype: bfloat16
```
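For reference, a configuration like the one above is typically passed to mergekit's CLI; this is a hedged sketch rather than the authors' exact command, and the output directory is a placeholder:
```bash
# Assumed invocation of the mergekit CLI on the YAML config shown above
mergekit-yaml config.yaml ./merged-model --cuda
```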
|
andersonbcdefg/gpt-oss-20b-multilingual-reasoner
|
andersonbcdefg
| 2025-08-07T01:46:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T01:28:30Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="andersonbcdefg/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
myfi/parser_model_ner_3.42_checkpoint_250
|
myfi
| 2025-08-07T01:44:11Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T01:34:46Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sunxysun/ppo-SnowballTarget
|
sunxysun
| 2025-08-07T01:33:17Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-07T01:33:12Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: sunxysun/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ghostai1/internalRAGCX
|
ghostai1
| 2025-08-07T01:29:39Z | 0 | 0 | null |
[
"model",
"region:us"
] | null | 2025-05-02T03:01:42Z |
---
tags: [model]
---
# Internal RAG CX Data Preprocessing Demo
A robust data preprocessing pipeline for Retrieval-Augmented Generation (RAG) and Context-Augmented Generation (CAG) systems, deployed on Hugging Face as a Model repository (free tier). Built with over 5 years of AI expertise since 2020, this demo focuses on cleaning and preparing call center datasets for enterprise-grade CX applications in SaaS, HealthTech, FinTech, and eCommerce. It integrates advanced data wrangling with Pandas, ensuring high-quality FAQs for downstream RAG/CAG pipelines, and is compatible with Amazon SageMaker and Azure AI for scalable modeling.
## Technical Architecture
### Data Preprocessing Pipeline
The core of this demo is a comprehensive data preprocessing pipeline designed to clean raw call center datasets:
- **Data Ingestion**:
- Parses CSVs with `pd.read_csv`, using `io.StringIO` for embedded data, with explicit `quotechar` and `escapechar` to handle complex strings.
- Handles datasets with columns: `call_id`, `question`, `answer`, `language`.
- **Junk Data Cleanup**:
- **Null Handling**: Drops rows with missing `question` or `answer` using `df.dropna()`.
- **Duplicate Removal**: Eliminates redundant FAQs via `df[~df['question'].duplicated()]`.
- **Short Entry Filtering**: Excludes questions <10 chars or answers <20 chars with `df[(df['question'].str.len() >= 10) & (df['answer'].str.len() >= 20)]`.
- **Malformed Detection**: Uses regex (`[!?]{2,}|(Invalid|N/A)`) to filter invalid questions.
- **Standardization**: Normalizes text (e.g., "mo" to "month") and fills missing `language` with "en".
- **Output**:
- Generates `cleaned_call_center_faqs.csv` for downstream modeling.
- Provides cleanup stats: nulls removed, duplicates removed, short entries filtered, malformed entries detected.
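The cleanup steps listed above can be reproduced in a few lines of Pandas. Below is a minimal sketch under the stated column names and thresholds; the embedded sample CSV is hypothetical, and the actual demo ships its own dataset in `app.py`.
```python
import io

import pandas as pd

# Hypothetical sample CSV; the actual demo embeds its own dataset in app.py.
raw_csv = """call_id,question,answer,language
1,How do I reset my password?,Use the 'Forgot password' link on the login page; limit is 3 resets per mo.,en
2,Price?,Plans start at 20 USD per mo.,en
3,How do I reset my password?,Use the 'Forgot password' link on the login page; limit is 3 resets per mo.,en
4,What??? Invalid entry here,This row was produced by a broken export job and should be dropped.,
5,How can I contact support?,,en
"""

df = pd.read_csv(io.StringIO(raw_csv), quotechar='"', escapechar="\\")
before = len(df)

df = df.dropna(subset=["question", "answer"])                                   # null handling
df = df[~df["question"].duplicated()]                                           # duplicate removal
df = df[(df["question"].str.len() >= 10) & (df["answer"].str.len() >= 20)]      # short entry filtering
# Non-capturing group avoids a pandas match-group warning; pattern mirrors the description above.
df = df[~df["question"].str.contains(r"[!?]{2,}|(?:Invalid|N/A)", regex=True)]  # malformed detection
df["answer"] = df["answer"].str.replace(r"\bmo\b", "month", regex=True)         # standardization
df["language"] = df["language"].fillna("en")

df.to_csv("cleaned_call_center_faqs.csv", index=False)
print(f"Cleaned FAQs: {len(df)}; removed {before - len(df)} junk entries")
```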
### Enterprise-Grade Modeling Compatibility
The cleaned dataset is optimized for:
- **Amazon SageMaker**: Ready for training BERT-based models (e.g., `bert-base-uncased`) for intent classification or FAQ retrieval, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning pipelines for fine-tuning models like DistilBERT in Azure Blob Storage, enabling scalable CX automation.
- **LLM Integration**: Supports fine-tuning LLMs (e.g., `distilgpt2`) for generative tasks, leveraging your FastAPI experience for API-driven inference.
## Performance Monitoring and Visualization
The demo includes a performance monitoring suite:
- **Processing Time Tracking**: Measures data ingestion, cleaning, and output times using `time.perf_counter()`, reported in milliseconds.
- **Cleanup Metrics**: Tracks the number of nulls, duplicates, short entries, and malformed entries removed.
- **Visualization**: Uses Matplotlib to plot a bar chart (`cleanup_stats.png`):
- Bars: Number of entries removed per category (Nulls, Duplicates, Short, Malformed).
- Palette: Professional muted colors for enterprise aesthetics.
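A correspondingly small sketch of the monitoring side, assuming the cleanup counts are already computed; the labels and output filename follow the description of `cleanup_stats.png` above, while the exact palette is an assumption.
```python
import time

import matplotlib
matplotlib.use("Agg")  # headless backend for a CPU-only deployment
import matplotlib.pyplot as plt

start = time.perf_counter()
# ... run the Pandas cleanup pass shown earlier ...
cleaning_ms = (time.perf_counter() - start) * 1000
print(f"Cleaning took {cleaning_ms:.1f} ms")

# Hypothetical cleanup counts, for illustration only.
stats = {"Nulls": 2, "Duplicates": 1, "Short": 1, "Malformed": 0}

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(list(stats.keys()), list(stats.values()),
       color=["#4c72b0", "#55a868", "#c44e52", "#8172b2"])  # muted palette (assumed)
ax.set_ylabel("Entries removed")
ax.set_title("Junk data cleanup")
fig.tight_layout()
fig.savefig("cleanup_stats.png")
```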
## Gradio Interface for Interactive Demo
The demo is accessible via Gradio, providing an interactive data preprocessing experience:
- **Input**: Upload a sample call center CSV or use the embedded demo dataset.
- **Outputs**:
- **Cleaned Dataset**: Download `cleaned_call_center_faqs.csv`.
- **Cleanup Stats**: Detailed breakdown (e.g., “Cleaned FAQs: 6; removed 4 junk entries: 2 nulls, 1 duplicates, 1 short, 0 malformed”).
- **Performance Plot**: Visual metrics for processing time and cleanup stats.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, enterprise-ready UI.
## Setup
- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `pandas==2.2.3`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes embedded demo dataset for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).
## Usage
- **Upload CSV**: Provide a call center CSV in the Gradio UI, or use the default demo dataset.
- **Output**:
- **Cleaned Dataset**: Download the processed `cleaned_call_center_faqs.csv`.
- **Cleanup Stats**: “Cleaned FAQs: 6; removed 4 junk entries: 2 nulls, 1 duplicates, 1 short, 0 malformed”.
- **Performance Plot**: Visual metrics for processing time and cleanup stats.
**Example**:
- **Input CSV**: Sample dataset with 10 FAQs, including 2 nulls, 1 duplicate, 1 short entry.
- **Output**:
- Cleaned Dataset: 6 FAQs in `cleaned_call_center_faqs.csv`.
- Cleanup Stats: “Cleaned FAQs: 6; removed 4 junk entries: 2 nulls, 1 duplicates, 1 short, 0 malformed”.
- Plot: Processing Time (Ingestion: 50ms, Cleaning: 30ms, Output: 10ms), Cleanup Stats (Nulls: 2, Duplicates: 1, Short: 1, Malformed: 0).
## Technical Details
**Stack**:
- **Pandas**: Data wrangling and preprocessing for call center CSVs.
- **Gradio**: Interactive UI for real-time data preprocessing demos.
- **Matplotlib**: Performance visualization with bar charts.
- **FastAPI Compatibility**: Designed with API-driven preprocessing in mind, leveraging your experience with FastAPI for scalable deployments.
**Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required.
**Extensibility**: Ready for integration with RAG/CAG pipelines, and cloud deployments on AWS Lambda or Azure Functions.
## Purpose
This demo showcases expertise in data preprocessing for AI-driven CX automation, focusing on call center data quality. Built on over 5 years of experience in AI, data engineering, and enterprise-grade deployments, it demonstrates the power of Pandas-based data cleaning for RAG/CAG pipelines, making it ideal for advanced CX solutions in call center environments.
## Latest Update
**Status Update**: Configuration missing in update.ini for ghostai1/internalRAGCX: Expected sections InternalragcxUpdate and InternalragcxEmojis - May 28, 2025 📝
## Future Enhancements
- **Real-Time Streaming**: Add support for real-time data streaming from Kafka for live preprocessing.
- **FastAPI Deployment**: Expose preprocessing pipeline via FastAPI endpoints for production-grade use.
- **Advanced Validation**: Implement stricter data validation rules using machine learning-based outlier detection.
- **Cloud Integration**: Enhance compatibility with AWS Glue or Azure Data Factory for enterprise data pipelines.
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz
|
maai-kyoto/vap_mc_jp
|
maai-kyoto
| 2025-08-07T01:26:27Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-08-06T00:51:48Z |
---
license: cc-by-nc-nd-4.0
---
|
tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF
|
tensorblock
| 2025-08-07T01:14:18Z | 1,960 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"sft",
"TensorBlock",
"GGUF",
"base_model:GingerBled/qwen3-0.6B-FullFineTune",
"base_model:quantized:GingerBled/qwen3-0.6B-FullFineTune",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T01:07:05Z |
---
library_name: transformers
tags:
- trl
- sft
- TensorBlock
- GGUF
base_model: GingerBled/qwen3-0.6B-FullFineTune
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## GingerBled/qwen3-0.6B-FullFineTune - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [GingerBled/qwen3-0.6B-FullFineTune](https://huggingface.co/GingerBled/qwen3-0.6B-FullFineTune).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [qwen3-0.6B-FullFineTune-Q2_K.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q2_K.gguf) | Q2_K | 0.296 GB | smallest, significant quality loss - not recommended for most purposes |
| [qwen3-0.6B-FullFineTune-Q3_K_S.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q3_K_S.gguf) | Q3_K_S | 0.323 GB | very small, high quality loss |
| [qwen3-0.6B-FullFineTune-Q3_K_M.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q3_K_M.gguf) | Q3_K_M | 0.347 GB | very small, high quality loss |
| [qwen3-0.6B-FullFineTune-Q3_K_L.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q3_K_L.gguf) | Q3_K_L | 0.368 GB | small, substantial quality loss |
| [qwen3-0.6B-FullFineTune-Q4_0.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q4_0.gguf) | Q4_0 | 0.382 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [qwen3-0.6B-FullFineTune-Q4_K_S.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q4_K_S.gguf) | Q4_K_S | 0.383 GB | small, greater quality loss |
| [qwen3-0.6B-FullFineTune-Q4_K_M.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q4_K_M.gguf) | Q4_K_M | 0.397 GB | medium, balanced quality - recommended |
| [qwen3-0.6B-FullFineTune-Q5_0.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q5_0.gguf) | Q5_0 | 0.437 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [qwen3-0.6B-FullFineTune-Q5_K_S.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q5_K_S.gguf) | Q5_K_S | 0.437 GB | large, low quality loss - recommended |
| [qwen3-0.6B-FullFineTune-Q5_K_M.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q5_K_M.gguf) | Q5_K_M | 0.444 GB | large, very low quality loss - recommended |
| [qwen3-0.6B-FullFineTune-Q6_K.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q6_K.gguf) | Q6_K | 0.495 GB | very large, extremely low quality loss |
| [qwen3-0.6B-FullFineTune-Q8_0.gguf](https://huggingface.co/tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF/blob/main/qwen3-0.6B-FullFineTune-Q8_0.gguf) | Q8_0 | 0.639 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF --include "qwen3-0.6B-FullFineTune-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GingerBled_qwen3-0.6B-FullFineTune-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
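Once a file is downloaded, it can be loaded locally. The sketch below assumes the `llama-cpp-python` bindings rather than the llama.cpp CLI; any runtime compatible with the commit noted above should also work, and `create_chat_completion` applies the chat template stored in the GGUF metadata when one is present.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/qwen3-0.6B-FullFineTune-Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what the GGUF format is in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```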
|
abcorrea/p1-v1-rep2
|
abcorrea
| 2025-08-07T01:13:36Z | 217 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T00:33:39Z |
---
base_model: Qwen/Qwen3-4B
library_name: transformers
model_name: p1-v1-rep2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for p1-v1-rep2
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="abcorrea/p1-v1-rep2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rbelanec/train_openbookqa_1754507501
|
rbelanec
| 2025-08-07T01:13:20Z | 21 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T00:44:29Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_openbookqa_1754507501
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_openbookqa_1754507501
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the openbookqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Num Input Tokens Seen: 4204168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1609 | 0.5 | 558 | 0.3989 | 210048 |
| 0.1553 | 1.0 | 1116 | 0.3019 | 420520 |
| 0.1877 | 1.5 | 1674 | 0.2930 | 630888 |
| 0.1598 | 2.0 | 2232 | 0.2751 | 841024 |
| 0.1687 | 2.5 | 2790 | 0.2808 | 1051168 |
| 0.2138 | 3.0 | 3348 | 0.2698 | 1261304 |
| 0.035 | 3.5 | 3906 | 0.2841 | 1472152 |
| 0.0211 | 4.0 | 4464 | 0.2730 | 1682016 |
| 0.1641 | 4.5 | 5022 | 0.2859 | 1892160 |
| 0.1816 | 5.0 | 5580 | 0.2932 | 2102920 |
| 0.1433 | 5.5 | 6138 | 0.3073 | 2311976 |
| 0.3356 | 6.0 | 6696 | 0.3024 | 2523672 |
| 0.3081 | 6.5 | 7254 | 0.3156 | 2732440 |
| 0.0016 | 7.0 | 7812 | 0.3136 | 2943688 |
| 0.2086 | 7.5 | 8370 | 0.3169 | 3153640 |
| 0.2407 | 8.0 | 8928 | 0.3258 | 3363864 |
| 0.2006 | 8.5 | 9486 | 0.3297 | 3574616 |
| 0.0247 | 9.0 | 10044 | 0.3266 | 3783840 |
| 0.0091 | 9.5 | 10602 | 0.3295 | 3994976 |
| 0.8494 | 10.0 | 11160 | 0.3303 | 4204168 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
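A minimal sketch of loading this adapter on top of the base model with PEFT's auto class, assuming access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights has been granted; the prompt is illustrative only, since the exact training template is not documented here.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_openbookqa_1754507501"
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Loads the base model and attaches the tuned adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Illustrative OpenBookQA-style prompt (an assumption, not the training format).
prompt = "Question: What keeps the planets in orbit around the Sun?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```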
|
barrera19/barreraman
|
barrera19
| 2025-08-07T00:45:09Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-06T23:50:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
JHelhoski/SmolLM-FT-OHPC
|
JHelhoski
| 2025-08-07T00:35:59Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolLM-360M",
"base_model:finetune:HuggingFaceTB/SmolLM-360M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T00:35:24Z |
---
base_model: HuggingFaceTB/SmolLM-360M
library_name: transformers
model_name: SmolLM-FT-OHPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM-FT-OHPC
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-360M](https://huggingface.co/HuggingFaceTB/SmolLM-360M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JHelhoski/SmolLM-FT-OHPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jhelhos1-binghamton-university/huggingface/runs/9wj50x5k)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nadsoft/fuction-call-last-v1
|
nadsoft
| 2025-08-07T00:32:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T00:30:48Z |
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hassan10ehab
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ljnlonoljpiljm/siglip2-base-patch16-256-crop-aesthetics
|
ljnlonoljpiljm
| 2025-08-07T00:29:54Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-07T00:29:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/vete-voidivine-v2-sdxl
|
John6666
| 2025-08-07T00:16:29Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"style",
"fingers",
"positions",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-07T00:11:23Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- style
- fingers
- positions
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1763400/vetevoidivine?modelVersionId=2086955).
This model created by [Vetehine](https://civitai.com/user/Vetehine).
|
Zero21/OncoScope
|
Zero21
| 2025-08-06T23:58:06Z | 209 | 0 |
transformers
|
[
"transformers",
"gguf",
"medical",
"genomics",
"cancer",
"oncology",
"mutation-analysis",
"precision-medicine",
"GGUF",
"Ollama",
"text-generation",
"en",
"dataset:ClinVar",
"dataset:COSMIC",
"base_model:unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-06T18:00:38Z |
---
base_model:
- unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit
pipeline_tag: text-generation
library_name: transformers
language:
- en
license: apache-2.0
datasets:
- ClinVar
- COSMIC
tags:
- medical
- genomics
- cancer
- oncology
- mutation-analysis
- precision-medicine
- GGUF
- Ollama
model_type: gemma3n
quantized_by: OncoScope
---
# OncoScope Cancer Genomics Analysis Model
OncoScope is a specialized AI model fine-tuned for cancer genomics analysis and precision oncology. Built on Google's Gemma 3n architecture, this model provides expert-level analysis of cancer mutations, risk assessments, and therapeutic recommendations while maintaining complete privacy through on-device inference.
## Model Details
- **Base Model**: Google Gemma 3n 2B E4B Chat IT
- **Parameters**: 6.9B (quantized from fine-tuned model)
- **Architecture**: Gemma3n
- **Quantization**: Q8_0 GGUF format
- **Context Length**: 32,768 tokens
- **Embedding Length**: 2,048
## Key Features
- **Cancer Mutation Analysis**: Pathogenicity assessment using ACMG/AMP guidelines
- **Risk Stratification**: Hereditary cancer syndrome evaluation
- **Therapeutic Recommendations**: Evidence-based drug target identification
- **Privacy-First**: Designed for on-device inference with Ollama
- **Clinical Guidelines**: Incorporates established medical standards
- **Multi-mutation Analysis**: Complex genomic interaction assessment
## Training Data
The model was fine-tuned on a curated dataset of 5,998 cancer genomics examples from:
- **ClinVar**: Clinical variant database
- **COSMIC Top 50**: Cancer mutation signatures
- **Expert-curated**: Clinical oncology cases
## Usage
### With Ollama
1. **Download the model files**:
- `oncoscope-gemma-3n-merged.Q8_0.gguf` (6.8GB)
- `Modelfile`
2. **Create the model**:
```bash
ollama create oncoscope -f Modelfile
```
3. **Run inference**:
```bash
ollama run oncoscope "Analyze the clinical significance of BRCA1 c.5266dupC mutation"
```
### Example Usage
```bash
ollama run oncoscope "Patient: 45-year-old female with family history of breast cancer.
Mutation: BRCA1 c.68_69delAG (p.Glu23ValfsTer17).
Please provide pathogenicity assessment and recommendations."
```
**Example Response**:
```json
{
"mutation_analysis": {
"gene": "BRCA1",
"variant": "c.68_69delAG",
"protein_change": "p.Glu23ValfsTer17",
"pathogenicity": "Pathogenic",
"confidence_score": 0.95,
"acmg_classification": "PVS1, PM2, PP3"
},
"clinical_significance": {
"cancer_risk": "High",
"associated_cancers": ["Breast", "Ovarian"],
"lifetime_risk": {
"breast_cancer": "55-85%",
"ovarian_cancer": "15-40%"
}
},
"recommendations": {
"genetic_counseling": "Strongly recommended",
"screening": "Enhanced surveillance starting age 25",
"prevention": "Consider prophylactic surgery",
"family_testing": "Cascade testing recommended"
}
}
```
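The same analysis can also be requested programmatically. Below is a minimal sketch that assumes the `ollama` Python client and a locally running Ollama server, after the model has been created with the `ollama create` step above.
```python
import ollama  # pip install ollama; talks to a locally running Ollama server

prompt = (
    "Patient: 45-year-old female with family history of breast cancer. "
    "Mutation: BRCA1 c.68_69delAG (p.Glu23ValfsTer17). "
    "Please provide pathogenicity assessment and recommendations."
)

response = ollama.chat(
    model="oncoscope",
    messages=[{"role": "user", "content": prompt}],
)
print(response["message"]["content"])
```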
## Model Capabilities
- **Pathogenicity Assessment**: ACMG/AMP guideline compliance
- **Risk Calculation**: Quantitative cancer risk estimates
- **Drug Recommendations**: FDA-approved targeted therapies
- **Family History Analysis**: Hereditary pattern recognition
- **Genetic Counseling**: Evidence-based guidance
- **Multi-lingual Support**: Medical terminology in multiple languages
## Limitations
- **Medical Disclaimer**: This model is for research and educational purposes only. Always consult qualified healthcare professionals for medical decisions.
- **Training Cutoff**: Knowledge based on training data through early 2024
- **Quantization**: Some precision loss due to Q8_0 quantization
- **Context Window**: Limited to 4,096 tokens for optimal performance
## Technical Specifications
- **Model Size**: 6.8GB (GGUF Q8_0)
- **Memory Requirements**: 8GB+ RAM recommended
- **Hardware**: CPU inference optimized, GPU acceleration supported
- **Operating Systems**: Cross-platform (macOS, Linux, Windows)
## Performance
The model demonstrates expert-level performance on:
- Variant pathogenicity classification (>90% accuracy vs. clinical consensus)
- Cancer risk assessment correlation with established guidelines
- Therapeutic recommendation alignment with FDA approvals
- Response time: 20-40 seconds for complex genomic analysis
## Privacy & Security
- **On-Device Inference**: No data transmitted to external servers
- **HIPAA Compliance**: Suitable for clinical environments
- **Offline Operation**: Full functionality without internet connection
- **Data Security**: Patient genetic information remains local
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{oncoscope2025,
title={OncoScope: Privacy-First Cancer Genomics Analysis with Gemma 3n},
author={Sheldon Aristide},
year={2025},
url={https://huggingface.co/Zero21/OncoScope}
}
```
## License
This model is released under the Apache 2.0 license, consistent with the base Gemma model licensing.
## Support & Contact
For questions, issues, or contributions:
- GitHub: [OncoScope Project](https://github.com/Aristide021/OncoScope)
- Issues: Please report bugs or feature requests via GitHub Issues
## Disclaimer
This AI model is intended for research and educational purposes only. It should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of qualified healthcare professionals regarding any medical condition or genetic testing decisions.
|
k1000dai/residualact_libero_lr5e5
|
k1000dai
| 2025-08-06T23:57:58Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"residualact",
"robotics",
"dataset:k1000dai/libero-addinfo",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T23:57:38Z |
---
datasets: k1000dai/libero-addinfo
library_name: lerobot
license: apache-2.0
model_name: residualact
pipeline_tag: robotics
tags:
- residualact
- lerobot
- robotics
---
# Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
morganstanley/qqWen-3B-RL
|
morganstanley
| 2025-08-06T23:55:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-27T22:06:01Z |
---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
# qqWen-3B-RL: Reasoning-Enhanced Q Programming Language Model
## Model Overview
**qqWen-3B-RL** is a 3-billion parameter language model specifically designed for advanced reasoning and code generation in the Q programming language. Built upon the robust Qwen 2.5 architecture, this model has undergone a comprehensive three-stage training process: pretraining, supervised fine-tuning (SFT), and reinforcement learning (RL) for the Q programming language.
**Associated Technical Report**: [Link to paper will be added here]
## 🔤 About Q Programming Language
Q is a high-performance, vector-oriented programming language developed by Kx Systems, primarily used in:
- **Financial Markets**: High-frequency trading, risk management, and market data analysis
- **Time-Series Analytics**: Real-time processing of large-scale temporal data
- **Data Science**: Efficient manipulation of large datasets with concise syntax
- **Quantitative Research**: Mathematical modeling and statistical analysis
### Key Q Language Features:
- **Vector Operations**: Built-in support for element-wise operations on arrays
- **Functional Programming**: First-class functions and powerful combinators
- **Memory Efficiency**: Optimized for handling large datasets in minimal memory
- **Speed**: Exceptional performance for numerical computations
- **Concise Syntax**: Expressive code that can accomplish complex tasks in few lines
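A minimal sketch of loading the model with transformers for Q code generation, assuming the usual Qwen2.5-style chat interface carries over to this fine-tune:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="morganstanley/qqWen-3B-RL", device="cuda")

messages = [
    {"role": "user", "content": "Write a q expression that computes a 3-item moving average of a numeric list."}
]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```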
## 📝 Citation
If you use this model in your research or applications, please cite our technical report.
|
mradermacher/ASearcher-Web-14B-GGUF
|
mradermacher
| 2025-08-06T23:39:52Z | 1,489 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:inclusionAI/ASearcher-Web-14B",
"base_model:quantized:inclusionAI/ASearcher-Web-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T12:00:19Z |
---
base_model: inclusionAI/ASearcher-Web-14B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/inclusionAI/ASearcher-Web-14B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ASearcher-Web-14B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ASearcher-Web-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ASearcher-Web-14B-GGUF/resolve/main/ASearcher-Web-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
x444/infogr
|
x444
| 2025-08-06T23:28:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T19:40:18Z |
---
license: apache-2.0
---
|
terry-dev/wittywriter-ai
|
terry-dev
| 2025-08-06T23:08:06Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"nlp",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T14:42:45Z |
---
tags:
- text-generation
- transformers
- nlp
license: mit
---
# WittyWriter-AI
WittyWriter-AI is a lightweight text generation model designed to produce human-like responses for various creative and conversational use cases. Built with accessibility in mind, this model offers quick inference and easy integration into web interfaces using Gradio.
## Model Details
- **Task**: Text Generation
- **Framework**: Transformers
- **License**: MIT
- **Author**: [terry-dev](https://huggingface.co/terry-dev)
## Usage
You can try the model using the Hugging Face `transformers` library:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="terry-dev/wittywriter-ai")
output = generator("Once upon a time", max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```
|
crislmfroes/svla-panda-open-base-cabinet-sim-v6
|
crislmfroes
| 2025-08-06T23:03:17Z | 1 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:crislmfroes/panda-open-base-cabinet-sim-v6",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T23:02:37Z |
---
base_model: lerobot/smolvla_base
datasets: crislmfroes/panda-open-base-cabinet-sim-v6
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
raniero/dpo_test_1754520501
|
raniero
| 2025-08-06T22:49:33Z | 0 | 0 | null |
[
"safetensors",
"LORA",
"bittensor",
"gradients",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T22:49:07Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LORA
- bittensor
- gradients
license: apache-2.0
---
# Submission for task `raniero/dpo_test_1754520501`
Fine-tuned using LoRA on dynamic dataset.
- Task ID: `raniero/dpo_test_1754520501`
- Repo: `raniero/dpo_test_1754520501`
- Timestamp: 2025-08-06T22:49:07.746293
- SHA256: a0e6ea3f5a039bc357fa426001f0b20dc97982d62e45f35a45b5a63103fa647a
|
stewy33/Qwen3-32B-chats_augmented_original_chat_character_agora-67e04e32
|
stewy33
| 2025-08-06T22:27:56Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-32B",
"base_model:adapter:Qwen/Qwen3-32B",
"region:us"
] | null | 2025-08-06T22:25:51Z |
---
base_model: Qwen/Qwen3-32B
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
raniero/dpo_test_1754519141
|
raniero
| 2025-08-06T22:26:45Z | 0 | 0 | null |
[
"safetensors",
"LORA",
"bittensor",
"gradients",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T22:26:28Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- LORA
- bittensor
- gradients
license: apache-2.0
---
# Submission for task `raniero/dpo_test_1754519141`
Fine-tuned using LoRA on dynamic dataset.
- Task ID: `raniero/dpo_test_1754519141`
- Repo: `raniero/dpo_test_1754519141`
- Timestamp: 2025-08-06T22:26:28.400969
|
AmpereComputing/granite-3.3-8b-instruct-gguf
|
AmpereComputing
| 2025-08-06T22:25:52Z | 64 | 0 | null |
[
"gguf",
"base_model:ibm-granite/granite-3.3-8b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-8b-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T22:23:44Z |
---
base_model:
- ibm-granite/granite-3.3-8b-instruct
---

# Ampere® optimized llama.cpp

Ampere® optimized build of [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#llamacpp) with full support for rich collection of GGUF models available at HuggingFace: [GGUF models](https://huggingface.co/models?search=gguf)
**For best results we recommend using models in our custom quantization formats available here: [AmpereComputing HF](https://huggingface.co/AmpereComputing)**
This Docker image can be run on bare metal Ampere® CPUs and Ampere® based VMs available in the cloud.
Release notes and binary executables are available on our [GitHub](https://github.com/AmpereComputingAI/llama.cpp/releases)
## Starting container
Default entrypoint runs the server binary of llama.cpp, mimicking behavior of original llama.cpp server image: [docker image](https://github.com/ggerganov/llama.cpp/blob/master/.devops/llama-server.Dockerfile)
To launch shell instead, do this:
```bash
sudo docker run --privileged=true --name llama --entrypoint /bin/bash -it amperecomputingai/llama.cpp:latest
```
Quick start example will be presented at docker container launch:

Make sure to visit us at [Ampere Solutions Portal](https://solutions.amperecomputing.com/solutions/ampere-ai)!
## Quantization
Ampere® optimized build of llama.cpp provides support for two new quantization methods, Q4_K_4 and Q8R16, offering model size and perplexity similar to Q4_K and Q8_0, respectively, but performing up to 1.5-2x faster on inference.
First, you'll need to convert the model to the GGUF format using [this script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py):
```bash
python3 convert_hf_to_gguf.py [path to the original model] --outtype [f32, f16, bf16 or q8_0] --outfile [output path]
```
For example:
```bash
python3 convert_hf_to_gguf.py path/to/llama2 --outtype f16 --outfile llama-2-7b-f16.gguf
```
Next, you can quantize the model using the following command:
```bash
./llama-quantize [input file] [output file] [quantization method]
```
For example:
```bash
./llama-quantize llama-2-7b-f16.gguf llama-2-7b-Q8R16.gguf Q8R16
```
## Support
Please contact us at <[email protected]>
## LEGAL NOTICE
By accessing, downloading or using this software and any required dependent software (the “Ampere AI Software”), you agree to the terms and conditions of the software license agreements for the Ampere AI Software, which may also include notices, disclaimers, or license terms for third party software included with the Ampere AI Software. Please refer to the [Ampere AI Software EULA v1.6](https://ampereaidevelop.s3.eu-central-1.amazonaws.com/Ampere+AI+Software+EULA+-+v1.6.pdf) or other similarly-named text file for additional details.
|
Silin1590/Qwen3-8B-S1K-42-Langs-5ep
|
Silin1590
| 2025-08-06T21:49:26Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T21:46:57Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) in mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
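Once one of these servers is running, you can query the endpoint with any OpenAI-compatible client. A minimal sketch using the `openai` Python client follows; the base URL and port are assumptions matching the vLLM command above (adjust them for SGLang or a custom deployment):
```python
# Minimal OpenAI-compatible client call against a locally served Qwen3-8B endpoint.
# The base URL, port, and API key are placeholders for a local deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```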
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (see the sketch after this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
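As a concrete illustration of the thinking-mode sampling settings above, here is a minimal sketch that continues from the Quickstart snippet, so `model`, `tokenizer`, and `model_inputs` are assumed to already exist:
```python
# Thinking-mode sampling settings; greedy decoding is intentionally avoided.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```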
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
rmdhirr/gemma-base-2-2-1800
|
rmdhirr
| 2025-08-06T21:45:39Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"region:us"
] | null | 2025-08-06T21:40:20Z |
---
base_model: google/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
KRadim/marian-finetuned-kde4-en-to-fr
|
KRadim
| 2025-08-06T21:38:16Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-08-06T20:42:37Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8778
- Model Preparation Time: 0.0058
- Bleu: 49.8177
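A minimal usage sketch with the `transformers` translation pipeline (the translated output will vary):
```python
# Translate an English string to French with the fine-tuned checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="KRadim/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```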
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.54.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
aumoai/aumogpt-Qwen2.5-7B-Instruct-generic-lora
|
aumoai
| 2025-08-06T21:31:41Z | 7 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-06T18:38:47Z |
# LoRA config for Qwen2.5-7B-Instruct (adapted from Llama 3.3 QLoRA recipes).
# Borrows param values from:
# https://github.com/pytorch/torchtune/blob/main/recipes/configs/llama3_3/70B_lora.yaml
# https://github.com/pytorch/torchtune/blob/main/recipes/configs/llama3_1/405B_qlora.yaml
#
# Requirements:
# - Log into WandB (`wandb login`) or disable `enable_wandb`
# - Log into HF: `huggingface-cli login`
# - Request access to Llama 3.3: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
#
# Usage:
# oumi train -c configs/recipes/llama3_3/sft/70b_qlora/train.yaml
#
# See Also:
# - Documentation: https://oumi.ai/docs/en/latest/user_guides/train/train.html
# - Config class: oumi.core.configs.TrainingConfig
# - Config source: https://github.com/oumi-ai/oumi/blob/main/src/oumi/core/configs/training_config.py
# - Other training configs: configs/**/pretraining/, configs/**/sft/, configs/**/dpo/
model:
model_name: "Qwen/Qwen2.5-7B-Instruct"
model_max_length: 4096
torch_dtype_str: "bfloat16"
attn_implementation: "flash_attention_2" #"sdpa"
load_pretrained_weights: True
trust_remote_code: True
data:
train:
datasets:
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumo_dataset_test.json"
# shuffle: True
# seed: 42
- dataset_name: "text_sft"
dataset_path: "datasets/aumogpt_generic.json"
shuffle: True
seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/xp3_qwen_2000.json"
# shuffle: True
# seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumogpt_train.json"
# shuffle: True
# seed: 42
# mixture_strategy: "all_exhausted" # Strategy for mixing datasets
# seed: 123456789426465
validation:
datasets:
- dataset_name: "text_sft"
dataset_path: "datasets/aumo_dataset_test.json"
split: "validation"
# sample_count: 10
training:
trainer_type: "TRL_SFT"
use_peft: True
save_steps: 200
num_train_epochs: 2
per_device_train_batch_size: 1
per_device_eval_batch_size: 1
gradient_accumulation_steps: 16
max_grad_norm: null
try_resume_from_last_checkpoint: false
enable_gradient_checkpointing: True
gradient_checkpointing_kwargs:
use_reentrant: False
ddp_find_unused_parameters: False
optimizer: "adamw_torch" # "adamw_torch" #paged_adamw_8bit
learning_rate: 5.0e-4
warmup_steps: 10
weight_decay: 0.01
compile: False
dataloader_num_workers: "auto"
dataloader_prefetch_factor: 32
logging_steps: 10
log_model_summary: False
empty_device_cache_steps: 50
output_dir: "results/oumi/qwen7b_xp3_aumo.lora"
include_performance_metrics: True
enable_wandb: True
eval_strategy: "steps" # When to evaluate ("no", "steps", "epoch")
eval_steps: 25
peft:
q_lora: False
# q_lora_bits: 4
# bnb_4bit_quant_type: "nf4"
# bnb_4bit_quant_storage: "bfloat16"
# bnb_4bit_compute_dtype: "bfloat16"
# use_bnb_nested_quant: True
lora_r: 64
lora_alpha: 32
lora_dropout: 0.2
lora_target_modules:
- "q_proj"
- "k_proj"
- "v_proj"
- "o_proj"
- "gate_proj"
- "down_proj"
- "up_proj"
# fsdp:
# enable_fsdp: True
# forward_prefetch: True
# sharding_strategy: "FULL_SHARD"
# auto_wrap_policy: "TRANSFORMER_BASED_WRAP"
# transformer_layer_cls: "LlamaDecoderLayer"
|
moscowx21/blockassist-bc-extinct_bipedal_clam_1754515725
|
moscowx21
| 2025-08-06T21:29:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct bipedal clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T21:29:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct bipedal clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
|
unsloth
| 2025-08-06T21:21:36Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T21:21:17Z |
---
tags:
- unsloth
base_model:
- Qwen/Qwen3-4B-Thinking-2507
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# Qwen3-4B-Thinking-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
Over the past three months, we have continued to scale the **thinking capability** of Qwen3-4B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-4B-Thinking-2507**, featuring the following key enhancements:
- **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise.
- **Markedly better general capabilities**, such as instruction following, tool usage, text generation, and alignment with human preferences.
- **Enhanced 256K long-context understanding** capabilities.
**NOTE**: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.

## Model Overview
**Qwen3-4B-Thinking-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 natively**.
**NOTE: This model supports only thinking mode. In addition, specifying `enable_thinking=True` is no longer required.**
Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Performance
| | Qwen3-30B-A3B Thinking | Qwen3-4B Thinking | Qwen3-4B-Thinking-2507 |
|--- | --- | --- | --- |
| **Knowledge** | | |
| MMLU-Pro | **78.5** | 70.4 | 74.0 |
| MMLU-Redux | **89.5** | 83.7 | 86.1 |
| GPQA | **65.8** | 55.9 | **65.8** |
| SuperGPQA | **51.8** | 42.7 | 47.8 |
| **Reasoning** | | |
| AIME25 | 70.9 | 65.6 | **81.3** |
| HMMT25 | 49.8 | 42.1 | **55.5** |
| LiveBench 20241125 | **74.3** | 63.6 | 71.8 |
| **Coding** | | |
| LiveCodeBench v6 (25.02-25.05) | **57.4** | 48.4 | 55.2 |
| CFEval | **1940** | 1671 | 1852 |
| OJBench | **20.7** | 16.1 | 17.9 |
| **Alignment** | | |
| IFEval | 86.5 | 81.9 | **87.4** |
| Arena-Hard v2$ | **36.3** | 13.7 | 34.9 |
| Creative Writing v3 | **79.1** | 61.1 | 75.6 |
| WritingBench | 77.0 | 73.5 | **83.3** |
| **Agent** | | |
| BFCL-v3 | 69.1 | 65.9 | **71.2** |
| TAU1-Retail | 61.7 | 33.9 | **66.1** |
| TAU1-Airline | 32.0 | 32.0 | **48.0** |
| TAU2-Retail | 34.2 | 38.6 | **53.5** |
| TAU2-Airline | 36.0 | 28.0 | **58.0** |
| TAU2-Telecom | 22.8 | 17.5 | **27.2** |
| **Multilingualism** | | |
| MultiIF | 72.2 | 66.3 | **77.3** |
| MMLU-ProX | **73.1** | 61.0 | 64.2 |
| INCLUDE | **71.9** | 61.8 | 64.4 |
| PolyMATH | 46.1 | 40.0 | **46.2** |
$ For reproducibility, we report the win rates evaluated by GPT-4.1.
& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-4B-Thinking-2507"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content) # no opening <think> tag
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Thinking-2507 --context-length 262144 --reasoning-parser deepseek-r1
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-4B-Thinking-2507 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```
**Note: If you encounter out-of-memory (OOM) issues, you may consider reducing the context length to a smaller value. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.**
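For example, a reduced-context variant of the vLLM command above might look like this (131,072 is an assumed value; pick whatever fits your hardware):
```shell
vllm serve Qwen/Qwen3-4B-Thinking-2507 --max-model-len 131072 --enable-reasoning --reasoning-parser deepseek_r1
```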
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
# Using OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example,
# `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-4B-Thinking-2507 --served-model-name Qwen3-4B-Thinking-2507 --max-model-len 262144`.
llm_cfg = {
'model': 'Qwen3-4B-Thinking-2507',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base without reasoning and tool call parsing
'api_key': 'EMPTY',
'generate_cfg': {
'thought_in_content': True,
},
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
SkyWalkertT1/crypto_bert_sentiment
|
SkyWalkertT1
| 2025-08-06T21:21:23Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"finance",
"crypto",
"turkish",
"BTC",
"ETH",
"XRP",
"tr",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-03T15:27:09Z |
---
library_name: transformers
tags:
- finance
- crypto
- text-classification
- bert
- turkish
- BTC
- ETH
- XRP
license: mit
base_model:
- dbmdz/bert-base-turkish-cased
pipeline_tag: text-classification
language:
- tr
---
# SkyWalkertT1/crypto_bert_sentiment
## 📌 Model Details
### Model Description
This is a BERT-based sentiment classification model fine-tuned on Turkish-language cryptocurrency-related comments. It predicts one of three sentiment classes: positive, neutral, or negative. This model was built using the Hugging Face 🤗 Transformers library and is suitable for analyzing sentiment in crypto communities, forums, or financial social media texts in Turkish.
- **Developed by:** [SkyWalkertT1 - Furkan Fatih Çiftçi]
- **Funded by:** Personal / Community Open Source
- **Shared by:** SkyWalkertT1
- **Model type:** BERT-based Sequence Classification
- **Language(s) (NLP):** Turkish
- **License:** Apache 2.0
- **Finetuned from model:** `dbmdz/bert-base-turkish-cased`
## 📚 Training Details
### Training Data
Dataset consists of labeled Turkish-language comments related to cryptocurrency, manually tagged with 3 sentiment labels.
**The dataset used for training this model is proprietary and was created and labeled by the author.**
The dataset shape is approximately `(1171, 2)` — indicating 1171 samples with 2 columns (text and label).
### Model Sources
- **Repository:** https://huggingface.co/SkyWalkertT1/my_crypto_comment_model
## 🔍 Uses
### Direct Use
- Turkish sentiment analysis on crypto/financial text
- Educational / experimental use for NLP in Turkish
### Downstream Use
- Integration into crypto sentiment bots
- Turkish language feedback systems
- Sentiment dashboards for crypto forums
### Out-of-Scope Use
- Use on non-Turkish text
- Medical, legal, or other high-risk domain sentiment prediction
## ⚠️ Bias, Risks, and Limitations
The model was trained on data specific to cryptocurrency sentiment in Turkish. It may not generalize to other domains. Model performance may vary depending on the writing style and slang usage.
### Recommendations
- Do not use this model for critical decision-making.
- Human validation should accompany any automated output.
## 🚀 How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_path = "SkyWalkertT1/my_crypto_comment_model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
text = "Bugün piyasada büyük bir düşüş bekliyorum."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class = torch.argmax(logits, dim=1).item()
labels = ['negative', 'neutral', 'positive']
print(f"Prediction: {labels[predicted_class]}")
```
## 📚 Training Details
### Training Procedure
Model was fine-tuned using Hugging Face's `Trainer` API.
#### Training Hyperparameters
- Epochs: 4
- Batch size: 16
- Optimizer: AdamW
- Learning rate: 2e-5
- Precision: fp32
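A minimal sketch of this `Trainer` setup with the hyperparameters above; `train_ds` and `eval_ds` are assumed to be tokenized datasets prepared elsewhere and are not part of this repository:
```python
# Illustrative fine-tuning setup; `train_ds` and `eval_ds` are placeholder datasets.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "dbmdz/bert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

args = TrainingArguments(
    output_dir="crypto-bert-sentiment",
    num_train_epochs=4,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # AdamW is the Trainer's default optimizer; fp32 is the default precision
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```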
## 📈 Evaluation
### Testing Data, Factors & Metrics
Model evaluated on a 20% validation split from the same dataset.
#### Metrics
- Accuracy
- F1-score (macro average)
### Results
- Accuracy: ~85%
- F1-macro: ~84%
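A minimal sketch of how these metrics can be computed with scikit-learn, assuming gold labels and model predictions (as class indices) from the validation split:
```python
# Placeholder labels/predictions; replace with the real validation outputs.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 2, 1, 2, 0]
y_pred = [0, 2, 1, 1, 0]

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1-macro:", f1_score(y_true, y_pred, average="macro"))
```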
## 🌍 Environmental Impact
Carbon emissions are minimal due to fine-tuning only (~4 hours on a single NVIDIA T4 GPU).
- **Hardware Type:** NVIDIA T4 (Google Colab)
- **Hours used:** ~4
- **Cloud Provider:** Google Colab
- **Carbon Emitted:** Approx. ~1 kg CO2eq
## 🧠 Technical Specifications
### Model Architecture and Objective
BERT transformer architecture with a classification head on top for sequence classification into 3 sentiment classes.
### Compute Infrastructure
- Google Colab
- PyTorch + Transformers
## 📣 Citation
**BibTeX:**
```bibtex
@misc{SkyWalkertT1_crypto_bert,
author = {Furkan Fatih Çiftçi},
title = {Turkish Crypto Sentiment Model},
  year = {2025},
howpublished = {\url{https://huggingface.co/SkyWalkertT1/my_crypto_comment_model}},
}
```
## 📬 Contact
For feedback or collaboration:
- Email: [email protected]
|
KotshinZ/Qwen3-32B-DNA
|
KotshinZ
| 2025-08-06T21:18:49Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"open-r1",
"trl",
"conversational",
"dataset:openai/gsm8k",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T20:47:04Z |
---
base_model: Qwen/Qwen3-32B
datasets: openai/gsm8k
library_name: transformers
model_name: Qwen3-32B-DNA
tags:
- generated_from_trainer
- sft
- open-r1
- trl
licence: license
---
# Model Card for Qwen3-32B-DNA
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) on the [openai/gsm8k](https://huggingface.co/datasets/openai/gsm8k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KotshinZ/Qwen3-32B-DNA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neko-llm/huggingface/runs/jajnce2p)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.54.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
giovannidemuri/llama8b-er-afg-v62-seed2-hx
|
giovannidemuri
| 2025-08-06T21:12:32Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T19:17:30Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v62-seed2-hx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v62-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
bapi2025/blockassist-bc-lanky_silky_duck_1754512557
|
bapi2025
| 2025-08-06T21:02:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky silky duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T21:01:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky silky duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|