modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
VIDEOS-18-durefishan-Viral-Video/FULL.VIDEO.Dure.Fishan.Viral.Video.Tutorial.Official | VIDEOS-18-durefishan-Viral-Video | 2025-06-16T12:23:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:23:19Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ WATCH NOW ==►► (Sign Up to FULL WATCH VIDEO❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤►✅ WATCH NOW ==►► (Full Video Link)</a>
|
dllmpg/ppo-0.3 | dllmpg | 2025-06-16T12:22:27Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T12:21:55Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -183.29 +/- 55.39
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.3,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dllmpg/ppo-0.3',
 'batch_size': 512,
 'minibatch_size': 128}
```
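The last two entries are not free hyperparameters: in CleanRL-style PPO they are derived from the rollout settings. With 4 parallel environments each collecting 128 steps, one rollout yields 512 transitions, split into 4 minibatches of 128. A quick sanity check:

```python
num_envs = 4
num_steps = 128
num_minibatches = 4

# CleanRL-style PPO derives these from the rollout settings:
batch_size = num_envs * num_steps                 # transitions gathered per rollout
minibatch_size = batch_size // num_minibatches    # transitions per gradient step

print(batch_size, minibatch_size)  # 512 128
```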
|
LakshGupta/Taxi-v3 | LakshGupta | 2025-06-16T12:22:04Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T12:22:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your installation

# load_from_hub is the Deep RL Course helper that downloads and unpickles the model dict
model = load_from_hub(repo_id="LakshGupta/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
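The snippet above only rebuilds the environment; to act with the agent you also need the learned Q-table. Assuming the unpickled object is a dict holding the table under a `"qtable"` key (as in the Deep RL Course cards; the toy table below is a stand-in for the real download), greedy action selection is a single argmax per state:

```python
import numpy as np

# Toy stand-in for the dict returned by load_from_hub; the "qtable" key
# is an assumption based on the Deep RL Course card format.
model = {"env_id": "Taxi-v3",
         "qtable": np.array([[0.1, 0.9, 0.0],
                             [0.5, 0.2, 0.3]])}

def greedy_action(qtable, state):
    """Act greedily: take the action with the highest Q-value in this state."""
    return int(np.argmax(qtable[state]))

print(greedy_action(model["qtable"], 0))  # -> 1
```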
|
TV-Shruthi-Narayanan-Viral-Videos/FULL.VIDEO.Shruthi.Narayanan.Viral.Video.Tutorial.Official | TV-Shruthi-Narayanan-Viral-Videos | 2025-06-16T12:21:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:21:34Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ WATCH NOW ==►► (Sign Up to FULL WATCH VIDEO❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤►✅ WATCH NOW ==►► (Full Video Link)</a>
|
Trupthi/cnn_news_summary_model_trained_on_reduced_data | Trupthi | 2025-06-16T12:21:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-16T10:24:00Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: cnn_news_summary_model_trained_on_reduced_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 144 | 1.9070 | 0.2375 | 0.0951 | 0.194 | 0.194 | 20.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
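The Rouge1/Rouge2 columns in the results table are unigram/bigram overlap F-scores between generated and reference summaries. The real evaluation uses the standard ROUGE tooling; a minimal, dependency-free sketch of ROUGE-1 F1 for intuition:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat", "the cat sat down"), 4))  # 0.8571
```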
|
mlx-community/Falcon-H1-0.5B-Instruct-8bit | mlx-community | 2025-06-16T12:17:57Z | 2 | 0 | mlx | [
"mlx",
"safetensors",
"falcon_h1",
"falcon-h1",
"text-generation",
"conversational",
"base_model:tiiuae/Falcon-H1-0.5B-Instruct",
"base_model:quantized:tiiuae/Falcon-H1-0.5B-Instruct",
"license:other",
"8-bit",
"region:us"
] | text-generation | 2025-06-14T15:05:03Z | ---
library_name: mlx
tags:
- falcon-h1
- mlx
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
base_model: tiiuae/Falcon-H1-0.5B-Instruct
inference: true
pipeline_tag: text-generation
---
# mlx-community/Falcon-H1-0.5B-Instruct-8bit
This model [mlx-community/Falcon-H1-0.5B-Instruct-8bit](https://huggingface.co/mlx-community/Falcon-H1-0.5B-Instruct-8bit) was
converted to MLX format from [tiiuae/Falcon-H1-0.5B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-0.5B-Instruct)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Falcon-H1-0.5B-Instruct-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
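The `apply_chat_template` call above wraps the raw prompt in the model's chat markup before generation. A toy illustration of the idea (the role markers below are made up; Falcon-H1 ships its own Jinja template with the tokenizer):

```python
def apply_chat_template_toy(messages, add_generation_prompt=True):
    """Toy sketch of what tokenizer.apply_chat_template does: wrap each
    turn in role markers and optionally open an assistant turn.
    These markers are hypothetical, not Falcon-H1's real format."""
    out = ""
    for m in messages:
        out += f"<|{m['role']}|>\n{m['content']}\n"
    if add_generation_prompt:
        out += "<|assistant|>\n"
    return out

print(apply_chat_template_toy([{"role": "user", "content": "hello"}]))
```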
|
aieng-lab/ModernBERT-base_story-points | aieng-lab | 2025-06-16T12:17:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T12:17:28Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT base for estimating story points
This model estimates story points as a numerical value.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
mlx-community/Falcon-H1-0.5B-Instruct-5bit | mlx-community | 2025-06-16T12:17:32Z | 2 | 0 | mlx | [
"mlx",
"safetensors",
"falcon_h1",
"falcon-h1",
"text-generation",
"conversational",
"base_model:tiiuae/Falcon-H1-0.5B-Instruct",
"base_model:quantized:tiiuae/Falcon-H1-0.5B-Instruct",
"license:other",
"5-bit",
"region:us"
] | text-generation | 2025-06-14T14:58:54Z | ---
library_name: mlx
tags:
- falcon-h1
- mlx
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
base_model: tiiuae/Falcon-H1-0.5B-Instruct
inference: true
pipeline_tag: text-generation
---
# mlx-community/Falcon-H1-0.5B-Instruct-5bit
This model [mlx-community/Falcon-H1-0.5B-Instruct-5bit](https://huggingface.co/mlx-community/Falcon-H1-0.5B-Instruct-5bit) was
converted to MLX format from [tiiuae/Falcon-H1-0.5B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-0.5B-Instruct)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Falcon-H1-0.5B-Instruct-5bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/Falcon-H1-0.5B-Instruct-4bit | mlx-community | 2025-06-16T12:16:57Z | 6 | 0 | mlx | [
"mlx",
"safetensors",
"falcon_h1",
"falcon-h1",
"text-generation",
"conversational",
"base_model:tiiuae/Falcon-H1-0.5B-Instruct",
"base_model:quantized:tiiuae/Falcon-H1-0.5B-Instruct",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-06-14T14:55:24Z | ---
library_name: mlx
tags:
- falcon-h1
- mlx
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
base_model: tiiuae/Falcon-H1-0.5B-Instruct
inference: true
pipeline_tag: text-generation
---
# mlx-community/Falcon-H1-0.5B-Instruct-4bit
This model [mlx-community/Falcon-H1-0.5B-Instruct-4bit](https://huggingface.co/mlx-community/Falcon-H1-0.5B-Instruct-4bit) was
converted to MLX format from [tiiuae/Falcon-H1-0.5B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-0.5B-Instruct)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Falcon-H1-0.5B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
aieng-lab/roberta-large_story-points | aieng-lab | 2025-06-16T12:16:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T12:16:33Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- roberta-large
pipeline_tag: text-classification
---
# RoBERTa large for estimating story points
This model estimates story points as a numerical value.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [roberta-large](https://huggingface.co/roberta-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
LakshGupta/q-FrozenLake-v1-4x4-noSlippery | LakshGupta | 2025-06-16T12:16:44Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T12:16:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your installation

# load_from_hub is the Deep RL Course helper that downloads and unpickles the model dict
model = load_from_hub(repo_id="LakshGupta/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
dllmpg/ppo-0.2 | dllmpg | 2025-06-16T12:14:50Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T11:55:44Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -221.45 +/- 187.38
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dllmpg/ppo-0.2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
TekbotRobotics/Cotonou_VLA_flags_sorting | TekbotRobotics | 2025-06-16T12:14:19Z | 0 | 1 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"dataset:TekbotRobotics/svla_so101_pickplace_flags_sorting",
"region:us"
] | robotics | 2025-06-15T09:18:07Z | ---
pipeline_tag: robotics
tags:
- lerobot
library_name: lerobot
datasets:
- TekbotRobotics/svla_so101_pickplace_flags_sorting
---
## SmolVLA: A vision-language-action model for affordable and efficient robotics
Resources and technical documentation:
[Train using Google Colab Notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb#scrollTo=ZO52lcQtxseE)
[SmolVLA HF Documentation](https://huggingface.co/docs/lerobot/smolvla)
Designed by Tekbot Robotics and inspired by Hugging Face.
This model was fine-tuned from the [Hugging Face base model](https://huggingface.co/lerobot/smolvla_base/).
Before proceeding to the next steps, install the environment by following the [Installation Guide](https://huggingface.co/docs/lerobot/installation) in the docs.
Install smolvla extra dependencies:
```bash
pip install -e ".[smolvla]"
```
Example of finetuning the smolvla pretrained model (`smolvla_base`):
```bash
python lerobot/scripts/train.py \
--policy.path=lerobot/smolvla_base \
--dataset.repo_id=TekbotRobotics/svla_so101_pickplace_flags_sorting \
--batch_size=8 \
--steps=2000 \
--output_dir=outputs/train/my_smolvla \
--job_name=my_smolvla_training \
--policy.device=cuda \
--wandb.enable=true
```
|
chamizzu/emotion_classifier_roberta | chamizzu | 2025-06-16T12:14:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T12:13:51Z | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: emotion_classifier_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classifier_roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Macro F1: 0.3920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1963 | 1.0 | 5227 | 0.1933 | 0.3603 |
| 0.1866 | 2.0 | 10454 | 0.1956 | 0.3795 |
| 0.1752 | 3.0 | 15681 | 0.1956 | 0.4092 |
| 0.1696 | 4.0 | 20908 | 0.2009 | 0.4037 |
| 0.1575 | 5.0 | 26135 | 0.2076 | 0.4142 |
| 0.1492 | 6.0 | 31362 | 0.2175 | 0.4077 |
| 0.139 | 7.0 | 36589 | 0.2297 | 0.3932 |
| 0.1274 | 8.0 | 41816 | 0.2388 | 0.3920 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
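The Macro F1 reported above averages per-class F1 scores with equal weight, so rare emotion classes count as much as frequent ones. A self-contained sketch of the metric (the actual evaluation likely used a library implementation such as scikit-learn's):

```python
import numpy as np

def per_class_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred, n_classes):
    """Average the per-class F1 scores, weighting every class equally."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        scores.append(per_class_f1(tp, fp, fn))
    return float(np.mean(scores))

y_true = np.array([0, 0, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2])
print(round(macro_f1(y_true, y_pred, 3), 3))  # 0.822
```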
|
JemmieX/wqewqewq | JemmieX | 2025-06-16T12:11:38Z | 0 | 0 | null | [
"fill-mask",
"af",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | fill-mask | 2025-06-09T06:48:27Z | ---
license: bigscience-bloom-rail-1.0
language:
- af
new_version: ResembleAI/chatterbox
pipeline_tag: fill-mask
---
ๅคงๆๅคง่ๅๅคงๅฎถ็ฑ้ๅฐๅฎถๅๆๅปๆ็ป่ฎฐ
##้ไธๅๆๅปๆๅบๅฐ้ฟ้ๅๆๅป้ฟๆฏ็ป่ฎฐ้ฟๆฏ่พพ
#ไน
ๅๅๅคงๅฎถๆๆบๅฒๆถ้ดๆๆญป็ป่ฎฐ |
Shubham-gupta-Hs/wATCH.Shubham.gupta.viral.video.original | Shubham-gupta-Hs | 2025-06-16T12:10:09Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:09:59Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ WATCH NOW ==►► (Sign Up to FULL WATCH VIDEO❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤►✅ WATCH NOW ==►► (Full Video Link)</a>
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF | Triangle104 | 2025-06-16T12:09:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:58:22Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
"**Risk of Sensitive or Controversial Outputs**": This model's safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
"**Not Suitable for All Audiences**": Due to limited content filtering, the model's
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
"**Legal and Ethical Responsibilities**": Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
"**Research and Experimental Use**": It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
"**Monitoring and Review Recommendations**": Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
"**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; perhaps it can match the performance of each of them.
This is just a test: exploring merges of different manifestations of models of the same type is another possibility.
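The MoE idea described above replaces each MLP with several expert MLPs plus a gate that routes every token to a few of them. A toy NumPy sketch of top-k expert routing (illustrative only; not the actual Qwen3/Huihui MoE implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, experts, gate_w, top_k=2):
    """Route each token to its top_k experts and mix their outputs
    by the renormalized gate probabilities."""
    logits = x @ gate_w                               # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)             # softmax gate
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-top_k:]           # indices of the chosen experts
        w = probs[t, top] / probs[t, top].sum()       # renormalize over those experts
        for wi, e in zip(w, top):
            out[t] += wi * experts[e](x[t])
    return out

d, n_experts = 4, 4
experts = [(lambda W: (lambda v: v @ W))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]                 # each "expert" is a toy linear map
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal((3, d))                       # 3 tokens of width 4
y = moe_forward(x, experts, gate_w)
print(y.shape)  # (3, 4)
```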
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q8_0-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q8_0.gguf -c 2048
```
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork | Alvin-LiuJia | 2025-06-16T12:09:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T12:07:28Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Shubham-Gupta-Official-Viral-Videos/FULL.VIDEO.Shubham.Gupta.Viral.Video.Tutorial.Official | Shubham-Gupta-Official-Viral-Videos | 2025-06-16T12:09:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:09:15Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ WATCH NOW ==►► (Sign Up to FULL WATCH VIDEO❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤►✅ WATCH NOW ==►► (Full Video Link)</a>
|
mradermacher/RZN-T-GGUF | mradermacher | 2025-06-16T12:08:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:phronetic-ai/RZN-T",
"base_model:quantized:phronetic-ai/RZN-T",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T12:05:14Z | ---
base_model: phronetic-ai/RZN-T
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/phronetic-ai/RZN-T
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RZN-T-GGUF/resolve/main/RZN-T.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
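The Size/GB column is essentially bits-per-weight arithmetic: file size ≈ parameter count × bpw ÷ 8, plus a little metadata. The 1.3 GB f16 file (16 bpw) implies roughly 0.65 B parameters, and that estimate reproduces the other rows to within the table's rounding. The bpw figures below are approximations, not exact GGUF spec values:

```python
def gguf_size_gb(n_params, bits_per_weight):
    """Rough GGUF file size: parameters * bits / 8, ignoring metadata
    and the few tensors that stay unquantized."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 0.65e9  # implied by the 1.3 GB f16 file at 16 bpw
print(round(gguf_size_gb(n_params, 16), 2))   # f16  -> ~1.3
print(round(gguf_size_gb(n_params, 8.5), 2))  # Q8_0 (~8.5 bpw) -> ~0.69
```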
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
23ikram/llama3bmerged | 23ikram | 2025-06-16T12:08:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:56:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/zindi-clinical-reasoning-t5-base-GGUF | mradermacher | 2025-06-16T12:05:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Skier8402/zindi-clinical-reasoning-t5-base",
"base_model:quantized:Skier8402/zindi-clinical-reasoning-t5-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T12:03:31Z | ---
base_model: Skier8402/zindi-clinical-reasoning-t5-base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Skier8402/zindi-clinical-reasoning-t5-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
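If a repo ships a quant as split files (names like `model.gguf.part1of2` are an assumption about the naming convention), the parts are plain byte splits and can be joined with `cat`:

```shell
# Demonstration with tiny placeholder parts (real parts are multi-GB)
printf 'GGUF-part-A' > model.gguf.part1of2
printf 'GGUF-part-B' > model.gguf.part2of2

# Join byte-split parts back into a single file (part order matters)
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```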
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zindi-clinical-reasoning-t5-base-GGUF/resolve/main/zindi-clinical-reasoning-t5-base.f16.gguf) | f16 | 0.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VIRAL-Video-two-wolf-one-girl-Video/FULL.VIDEO.two.wolf.one.girl.Viral.Video.Tutorial.Official | VIRAL-Video-two-wolf-one-girl-Video | 2025-06-16T12:04:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:03:56Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ CLICK HERE ==►► (Sign Up to Full Watch Video ❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤️►✅ CLICK HERE ==►► (Full Video Link)</a>
|
ahamedddd/showBranchesNLPDistilBertBaseUncased | ahamedddd | 2025-06-16T12:03:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T12:02:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jaeyong2/Qwen3-0.6B-DPO-Ja-Peft | jaeyong2 | 2025-06-16T12:03:11Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T13:01:59Z | ---
library_name: transformers
license: apache-2.0
language:
- ja
---
## Evaluation
```
!lm_eval --model hf \
--model_args pretrained=jaeyong2/Qwen3-0.6B-DPO-Ja-Peft \
--tasks mmlu,japanese_leaderboard,gsm8k \
--device cuda:0 \
--batch_size 1 \
--num_fewshot 5
```
| | Qwen3-0.6B-DPO-ja | Qwen3-0.6B |
|:------------------------------|----------------------:|----------------------:|
| MMLU | 0.41 | 0.40 |
| ja_leaderboard_jaqket_v2 | 0.30 | 0.28 |
| ja_leaderboard_jcommonsenseqa | 0.45 | 0.44 |
| ja_leaderboard_jnli | 0.24 | 0.26 |
| ja_leaderboard_jsquad | 0.49 | 0.48 |
| ja_leaderboard_marc_ja | 0.87 | 0.86 |
| ja_leaderboard_mgsm | 0.12 | 0.11 |
| ja_leaderboard_xlsum | 0.09 | 0.08 |
| ja_leaderboard_xwinograd | 0.55 | 0.55 |
| GSM8K | 0.44 | 0.42 |
## License
- Qwen/Qwen3-0.6B : https://choosealicense.com/licenses/apache-2.0/
## Acknowledgement
This research is supported by **TPU Research Cloud program**. |
TareksLab/Tesseract-DT-LLaMa-70B | TareksLab | 2025-06-16T12:02:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:TareksLab/Fifth-Dimension-V1-LLaMa-70B",
"base_model:merge:TareksLab/Fifth-Dimension-V1-LLaMa-70B",
"base_model:TareksLab/First-Dimension-V1-LLaMa-70B",
"base_model:merge:TareksLab/First-Dimension-V1-LLaMa-70B",
"base_model:TareksLab/Second-Dimension-V1-LLaMa-70B",
"base_model:merge:TareksLab/Second-Dimension-V1-LLaMa-70B",
"base_model:TareksLab/Sixth-Dimension-V1-LLaMa-70B",
"base_model:merge:TareksLab/Sixth-Dimension-V1-LLaMa-70B",
"base_model:TareksLab/Third-Dimension-V1-LLaMa-70B",
"base_model:merge:TareksLab/Third-Dimension-V1-LLaMa-70B",
"base_model:TareksLab/Zero-Dimension-F-LLaMa-70B",
"base_model:merge:TareksLab/Zero-Dimension-F-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:44:02Z | ---
base_model:
- TareksLab/Third-Dimension-V1-LLaMa-70B
- TareksLab/Sixth-Dimension-V1-LLaMa-70B
- TareksLab/First-Dimension-V1-LLaMa-70B
- TareksLab/Zero-Dimension-F-LLaMa-70B
- TareksLab/Second-Dimension-V1-LLaMa-70B
- TareksLab/Fifth-Dimension-V1-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/Second-Dimension-V1-LLaMa-70B](https://huggingface.co/TareksLab/Second-Dimension-V1-LLaMa-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/Third-Dimension-V1-LLaMa-70B](https://huggingface.co/TareksLab/Third-Dimension-V1-LLaMa-70B)
* [TareksLab/Sixth-Dimension-V1-LLaMa-70B](https://huggingface.co/TareksLab/Sixth-Dimension-V1-LLaMa-70B)
* [TareksLab/First-Dimension-V1-LLaMa-70B](https://huggingface.co/TareksLab/First-Dimension-V1-LLaMa-70B)
* [TareksLab/Zero-Dimension-F-LLaMa-70B](https://huggingface.co/TareksLab/Zero-Dimension-F-LLaMa-70B)
* [TareksLab/Fifth-Dimension-V1-LLaMa-70B](https://huggingface.co/TareksLab/Fifth-Dimension-V1-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/Sixth-Dimension-V1-LLaMa-70B
parameters:
weight: [0.1, 0.1, 0.1, 0.2, 0.5]
density: 0.5
- model: TareksLab/Fifth-Dimension-V1-LLaMa-70B
parameters:
weight: [0.1, 0.1, 0.2, 0.4, 0.2]
density: 0.5
- model: TareksLab/Zero-Dimension-F-LLaMa-70B
parameters:
weight: [0.1, 0.2, 0.4, 0.2, 0.1]
density: 0.5
- model: TareksLab/Third-Dimension-V1-LLaMa-70B
parameters:
weight: [0.2, 0.4, 0.2, 0.1, 0.1]
density: 0.5
- model: TareksLab/First-Dimension-V1-LLaMa-70B
parameters:
weight: [0.5, 0.2, 0.1, 0.1, 0.1]
density: 0.5
merge_method: dare_ties
base_model: TareksLab/Second-Dimension-V1-LLaMa-70B
parameters:
normalize: false
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: TareksLab/Zero-Dimension-F-LLaMa-70B
pad_to_multiple_of: 8
```
|
New-tutorial-College-Girls-viral-video/FULL.VIDEO.College.Girls.Viral.Video.Tutorial.Official | New-tutorial-College-Girls-viral-video | 2025-06-16T12:01:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T12:01:48Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ CLICK HERE ==►► (Sign Up to Full Watch Video ❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ❤️►✅ CLICK HERE ==►► (Full Video Link)</a>
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.05_0.75_epoch1 | MinaMila | 2025-06-16T12:01:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:59:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robinn6/deepseek-vl-7b-coco-refg-lora-upgraded-bbox | robinn6 | 2025-06-16T12:01:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision",
"referring-expression-grounding",
"object-detection",
"coco-refg",
"deepseek",
"lora",
"peft",
"en",
"dataset:lmms-lab/RefCOCOg",
"base_model:deepseek-ai/deepseek-vl-7b-chat",
"base_model:adapter:deepseek-ai/deepseek-vl-7b-chat",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-06-16T12:01:00Z | ---
license: apache-2.0
base_model: deepseek-ai/deepseek-vl-7b-chat
tags:
- vision
- referring-expression-grounding
- object-detection
- coco-refg
- deepseek
- lora
- peft
datasets:
- lmms-lab/RefCOCOg
language:
- en
library_name: transformers
pipeline_tag: object-detection
metrics:
- mean_iou
- accuracy
model-index:
- name: deepseek-vl-7b-coco-refg-lora-upgraded-bbox
results:
- task:
type: referring-expression-grounding
name: Referring Expression Grounding
dataset:
type: RefCOCOg
name: COCO-RefG (RefCOCOg)
metrics:
- type: mean_iou
value: 0.2567
name: Mean IoU
- type: accuracy
value: 12.5%
name: Accuracy @ IoU 0.5
---
# deepseek-vl-7b-coco-refg-lora-upgraded-bbox
This model is a fine-tuned version of [deepseek-ai/deepseek-vl-7b-chat](https://huggingface.co/deepseek-ai/deepseek-vl-7b-chat) for referring expression grounding on the COCO-RefG dataset.
## Model Description
- **Task**: Referring Expression Grounding (REG) - locating objects in images based on natural language descriptions
- **Base Model**: DeepSeek-VL 7B Chat
- **Training Method**: LoRA (Vision + Language) + Custom BBox Head with Position Embeddings
- **BBox Head Architecture**: EnhancedBoundingBoxHead
- **Dataset**: COCO-RefG (RefCOCOg)
- **Training Samples**: 7573
- **Validation Samples**: 5023
- **Output**: Bounding box coordinates [x, y, width, height] in normalized format [0,1]
## Performance
| Metric | Value |
|--------|-------|
| Mean IoU | 0.2567 |
| Accuracy @ IoU 0.5 | 12.5% |
| Accuracy @ IoU 0.75 | 2.3% |
| Evaluated Samples | 5023 |
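The Mean IoU and thresholded accuracies above come from standard box IoU; a minimal sketch for `[x, y, w, h]` boxes in the model's normalized format:

```python
def iou_xywh(a, b):
    # Convert [x, y, w, h] to corner coordinates
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # Intersection rectangle (empty intersections clamp to zero area)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

pred = [0.10, 0.10, 0.40, 0.40]
gt = [0.20, 0.20, 0.40, 0.40]
score = iou_xywh(pred, gt)      # ~0.39 for this pair
hit_at_05 = score >= 0.5        # counts toward Accuracy @ IoU 0.5
```

Mean IoU averages `score` over the evaluation set; the accuracy rows count the fraction of samples whose IoU clears 0.5 or 0.75.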
## Usage
```python
import torch
from transformers import AutoModelForCausalLM
from deepseek_vl.models import VLChatProcessor
from deepseek_vl.utils.io import load_pil_images
from PIL import Image
import requests
# Load the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = VLChatProcessor.from_pretrained("deepseek-ai/deepseek-vl-7b-chat")
model = AutoModelForCausalLM.from_pretrained(
"deepseek-ai/deepseek-vl-7b-chat",
torch_dtype=torch.float16,
trust_remote_code=True,
device_map="auto"
)
# Load the LoRA adapter weights from this repo
from peft import PeftModel
model = PeftModel.from_pretrained(model, "robinn6/deepseek-vl-7b-coco-refg-lora-upgraded-bbox/lora_weights")
# Load the bbox head
from bbox_head_model import LightweightBBoxHead
bbox_head = LightweightBBoxHead(model.language_model.config.hidden_size, mid_dim=512)
checkpoint = torch.load("robinn6/deepseek-vl-7b-coco-refg-lora-upgraded-bbox/bbox_head/pytorch_model.bin", map_location=device)
bbox_head.load_state_dict(checkpoint['model_state_dict'])
bbox_head.to(device)
bbox_head.eval()
# Prepare input
image_path = "path/to/your/image.jpg"
referring_expression = "the red car on the left"
conversation = [
{
"role": "User",
"content": f"<image_placeholder>Where is {referring_expression}?",
"images": [image_path],
},
{"role": "Assistant", "content": ""},
]
# Process and predict
pil_images = load_pil_images(conversation)
inputs = processor(conversations=conversation, images=pil_images, force_batchify=True)
inputs = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
with torch.no_grad():
embeds = model.prepare_inputs_embeds(**inputs)
outputs = model.language_model.model(
inputs_embeds=embeds,
attention_mask=inputs.get("attention_mask"),
output_hidden_states=True
)
hidden_states = outputs.hidden_states[-1]
last_token_hidden = hidden_states[:, -1, :]
bbox_pred = bbox_head(last_token_hidden)
# bbox_pred contains [x, y, width, height] in normalized coordinates
x, y, w, h = bbox_pred[0].cpu().numpy()
print(f"Bounding box: x={x:.3f}, y={y:.3f}, w={w:.3f}, h={h:.3f}")
```
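Because predictions are normalized to [0,1], drawing them on the original image requires scaling by its pixel dimensions; a small helper (the function name is illustrative):

```python
def denormalize_bbox(bbox, img_w, img_h):
    """Scale a normalized [x, y, w, h] box to pixel coordinates."""
    x, y, w, h = bbox
    return (x * img_w, y * img_h, w * img_w, h * img_h)

# Example: a box covering the right half of a 640x480 image
px = denormalize_bbox([0.5, 0.0, 0.5, 1.0], 640, 480)
# px == (320.0, 0.0, 320.0, 480.0)
```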
## Training Details
- **Learning Rate**: 1e-4
- **Optimizer**: AdamW
- **Scheduler**: MultiStepLR (milestones: [200, 400, 600], gamma: 0.5)
- **Loss Function**: GIoU Loss (ฮฑ=3.0) + L1 Loss (ฮฒ=0.3)
- **Epochs**: 3
- **Batch Size**: 1
- **Training Time**: 239.8
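A pure-Python, single-pair sketch of the GIoU + L1 objective above (the actual training code presumably uses a batched tensor implementation; `alpha`/`beta` match the weights listed):

```python
def giou_l1_loss(pred, gt, alpha=3.0, beta=0.3):
    # Corner coordinates from [x, y, w, h]
    px1, py1, px2, py2 = pred[0], pred[1], pred[0] + pred[2], pred[1] + pred[3]
    gx1, gy1, gx2, gy2 = gt[0], gt[1], gt[0] + gt[2], gt[1] + gt[3]
    # IoU
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    iou = inter / union if union > 0 else 0.0
    # GIoU subtracts the empty fraction of the smallest enclosing box
    cx1, cy1 = min(px1, gx1), min(py1, gy1)
    cx2, cy2 = max(px2, gx2), max(py2, gy2)
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area if c_area > 0 else iou
    # Mean absolute error over the four coordinates
    l1 = sum(abs(p - g) for p, g in zip(pred, gt)) / 4.0
    return alpha * (1.0 - giou) + beta * l1
```

Perfectly matching boxes give zero loss; disjoint boxes are penalized both by the GIoU term (which stays informative even at zero overlap) and by the L1 term.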
## Limitations
- The model is trained on COCO-RefG dataset and may not generalize well to other domains
- Performance may vary depending on the complexity of referring expressions
- Bounding box predictions are in normalized coordinates [0,1]
## Citation
If you use this model, please cite:
```bibtex
@misc{deepseek_vl_7b_coco_refg_lora_upgraded_bbox_2024,
title={deepseek-vl-7b-coco-refg-lora-upgraded-bbox: DeepSeek-VL Fine-tuned for Referring Expression Grounding},
author={{HF_USERNAME}},
year={2024},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/robinn6/deepseek-vl-7b-coco-refg-lora-upgraded-bbox}}
}
```
|
mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF | mradermacher | 2025-06-16T12:00:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:huihui-ai/Huihui-MoE-1.2B-A0.6B",
"base_model:quantized:huihui-ai/Huihui-MoE-1.2B-A0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:52:38Z | ---
base_model: huihui-ai/Huihui-MoE-1.2B-A0.6B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/Huihui-MoE-1.2B-A0.6B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q6_K.gguf) | Q6_K | 1.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-1.2B-A0.6B-GGUF/resolve/main/Huihui-MoE-1.2B-A0.6B.f16.gguf) | f16 | 2.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
theSOL1/joseon-documents-translator | theSOL1 | 2025-06-16T11:58:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
] | null | 2025-06-16T11:48:29Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
---
# theSOL1/joseon-documents-translator
## Model Details
### Model Description
- **Developed by:** [Subin Park](https://github.com/SOL1archive), [pggggggggh](https://github.com/pggggggggh)
- **Model type:** CLM
- **Language(s) (NLP):** Ancient/Modern Korean
- **License:** MIT License
- **Finetuned from model [optional]:** [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub Link](https://github.com/SOL1archive/ancient-document-LLM)
|
YeonjiEllen/Womens-health-AI | YeonjiEllen | 2025-06-16T11:57:26Z | 0 | 1 | null | [
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:mit",
"region:us"
] | null | 2025-06-16T11:56:16Z | ---
license: mit
base_model:
- google/medgemma-4b-it
---
### Role
You are a knowledgeable and compassionate AI assistant specializing in women's health, focusing on gynecological symptoms and conditions.
Your purpose is to help users better understand their symptoms, explain relevant medical terms clearly, and guide them toward appropriate care, all based on established medical information.
### Objectives
- When a user describes symptoms (e.g., pelvic pain, irregular menstruation, abnormal discharge), provide general, medically accurate information about possible related conditions.
- Explain medical vocabulary and common gynecological terms in simple language that is easy to understand.
- Encourage users to seek professional medical evaluation for accurate diagnosis and treatment.
- Help users recognize when symptoms could indicate a serious condition needing prompt medical attention.
### Tone and Style
- Use warm, empathetic, and supportive language.
- Avoid causing unnecessary alarm, while emphasizing the importance of proper medical care.
- Use clear, non-technical language suitable for people without medical training.
- Always be responsible and honest about the limits of the AI: it does not replace a doctor.
### Medical Accuracy and Safety
- Base all responses on established, peer-reviewed medical knowledge and guidelines.
- Do not guess or fabricate information. If uncertain, acknowledge the limitation.
- Avoid direct diagnoses; always remind users that only a healthcare professional can diagnose and treat.
### Example Interactions
**User:** I have severe menstrual cramps that get worse every month.
**Bot:** Severe menstrual cramps can sometimes be caused by conditions such as endometriosis or uterine fibroids, but many women also experience painful periods without an underlying disease.
If your pain is worsening or interfering with daily life, I recommend consulting a gynecologist who can provide a proper diagnosis and treatment options.
**User:** What does "endometriosis" mean?
**Bot:** Endometriosis is a condition where tissue similar to the lining inside the uterus grows outside of it, causing pain and sometimes fertility issues.
It can lead to painful periods and pelvic pain. A doctor can diagnose and help manage this condition.
### Rules
- Never provide a medical diagnosis or treatment advice.
- Always include a disclaimer that the chatbot is for informational purposes only.
- If a question is vague or unclear, ask the user for more details.
- Encourage seeing a healthcare professional for any concerning symptoms.
### Disclaimer
I am an AI language model designed to provide educational information about women's health.
I do not provide medical diagnoses or replace professional medical advice.
If you have health concerns, please consult a qualified healthcare provider.
|
fikriau/praktikum-ai-modul6-robertalargefocal | fikriau | 2025-06-16T11:56:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"multi-label-classification",
"emotion-classification",
"indonesian",
"en",
"dataset:emotion",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-15T05:35:19Z | ---
library_name: transformers
tags:
- text-classification
- multi-label-classification
- emotion-classification
- roberta
- indonesian
language: en
datasets:
- emotion
---
# Multi-Label Emotion Classification Model (Practicum Module 6)
This model is a `roberta-large` fine-tuned for multi-label emotion classification on an English text dataset. It can detect one or more of 14 different emotions in a piece of text, and was developed as part of Practicum Module 6 of the Artificial Intelligence and Machine Learning course.
## Model Details
### Model Description
The main goal of this project is to build a model that performs accurate multi-label emotion classification, where a single text can carry several emotions at once (e.g., 'joy' and 'excitement'). The model was trained and evaluated for a Kaggle competition whose primary evaluation metric is the **Macro F1-Score**.
To reach the best possible performance under hardware constraints (a 16 GB GPU), several advanced optimization techniques were applied during training.
- **Developed by:** [fikriau](https://huggingface.co/fikriau)
- **Model type:** `roberta`
- **Language(s) (NLP):** English (`en`)
- **License:** `apache-2.0`
- **Finetuned from model:** `roberta-large`
### Model Sources
- **Repository:** [https://huggingface.co/fikriau/praktikum-ai-modul6-robertalargefocal](https://huggingface.co/fikriau/praktikum-ai-modul6-robertalargefocal)
## Uses
### Direct Use
This model can be used directly for emotion prediction with the 🤗 Transformers `pipeline`.
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="fikriau/praktikum-ai-modul6-robertalargefocal", top_k=None)
text = "I can't believe I won the lottery, I'm so happy and excited!"
# The prediction returns scores for all 14 labels
results = pipe(text)
print(results)
# Example output:
# [[{'label': 'excitement', 'score': 0.98}, {'label': 'joy', 'score': 0.97}, ...]]
```
## Bias, Risks, and Limitations
This model was trained on the dataset provided for the practicum. Biases present in that data are likely to be reflected in the model's predictions. Performance may differ on text that diverges strongly from the training data (for example, formal text, technical text, or unusual dialects). Users should be aware of these potential biases when using the model.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "fikriau/praktikum-ai-modul6-robertalargefocal"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
text = "This is the best day of my life, I feel so much gratitude!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# Get probabilities with sigmoid and assign labels using a 0.5 threshold
probabilities = torch.sigmoid(logits)
predictions = (probabilities > 0.5).int()
# Print each detected emotion
for i, label in enumerate(model.config.id2label.values()):
    if predictions[0][i].item() == 1:
        print(f"Detected Emotion: {label} (Score: {probabilities[0][i]:.4f})")
```
## Training Details
### Training Data
This model was fine-tuned on the **Emotion Classification** dataset provided for the Kaggle competition in Practicum Module 6. The dataset consists of 46,455 English text samples, each of which can carry one or more of 14 emotion labels.
Exploratory data analysis (EDA) revealed significant **class imbalance**: some emotions such as `annoyance` appear far more often than others such as `embarrassment` and `fear`. This finding is the main justification for using `FocalLoss` as the loss function during training.
The data was split 90% for the training set and 10% for the validation set.
### Training Procedure
The training process was designed to maximize performance while handling the class imbalance and the limited GPU memory (16 GB VRAM).
#### Preprocessing
1. **Text Cleaning:** Input text is first cleaned to remove digital artifacts such as URLs and `@username` mentions. Emojis are converted to their textual representation (e.g., 😂 -> `:face_with_tears_of_joy:`) to preserve their emotional signal.
2. **Tokenization:** The cleaned text is tokenized with the `RobertaTokenizer` matching the `roberta-large` model.
3. **Padding & Truncation:** All text sequences are truncated or padded to a uniform length of `MAX_LENGTH = 128`. This length was chosen as the best trade-off between capturing textual context and keeping `roberta-large` within GPU memory.
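As a rough, self-contained sketch of the cleaning step described above (the actual pipeline presumably uses the `emoji` package for demojizing; the tiny alias table here is only a stand-in):

```python
import re

# Minimal stand-in for a full demojizer such as emoji.demojize (assumption)
EMOJI_ALIASES = {"\U0001F602": ":face_with_tears_of_joy:"}

def clean_text(text: str) -> str:
    """Strip URLs and @mentions, convert emojis to text aliases, squeeze spaces."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"@\w+", " ", text)                   # remove @username mentions
    for ch, alias in EMOJI_ALIASES.items():
        text = text.replace(ch, f" {alias} ")           # keep the emotional signal
    return re.sub(r"\s+", " ", text).strip()
```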
#### Training Hyperparameters
The training configuration combines several techniques to reach optimal performance:
- **Loss Function:** `FocalLoss` (with default parameters `alpha=0.25`, `gamma=2.0`) forces the model to focus on learning samples from rare emotion classes.
- **Training Regime:** `fp16` (mixed precision) is enabled to reduce memory usage and speed up GPU computation.
- **Optimizer:** `adamw_8bit` from the `bitsandbytes` library significantly reduces the memory footprint of the optimizer state.
- **Batching Strategy:** `per_device_train_batch_size` is set to **4** and `gradient_accumulation_steps` to **8**, giving an **effective batch size of 32** (4 x 8), which keeps training stable at low memory usage.
- **Learning Rate:** `2e-5`, a solid starting point for fine-tuning large models.
- **LR Scheduler:** `cosine`, to help the model converge more smoothly.
- **Epochs:** The model was trained for **3 epochs**. Evaluation was run at the end of each epoch to monitor performance and select the best checkpoint. |
alexandertam/babylm-hulda50-10m-roberta-base | alexandertam | 2025-06-16T11:54:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-16T11:53:40Z | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: babylm-hulda50-10m-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babylm-hulda50-10m-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6004
- Accuracy: 0.1688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 9000
- mixed_precision_training: Native AMP
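For intuition, the linear schedule above (2000 warmup steps, 9000 total steps, peak learning rate 5e-05) can be sketched as a plain function; this mirrors what `transformers.get_linear_schedule_with_warmup` computes, and is not the training code itself:

```python
def linear_warmup_lr(step: int, base_lr: float = 5e-05,
                     warmup_steps: int = 2000, total_steps: int = 9000) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # ramp up from 0
    frac_left = (total_steps - step) / (total_steps - warmup_steps)
    return base_lr * max(0.0, frac_left)              # decay to 0 at the last step
```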
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 10.4051 | 0.1139 | 100 | 9.7325 | 0.0476 |
| 9.4597 | 0.2278 | 200 | 9.0953 | 0.0526 |
| 8.7922 | 0.3417 | 300 | 8.3668 | 0.0752 |
| 8.0662 | 0.4556 | 400 | 7.6392 | 0.0853 |
| 7.3196 | 0.5695 | 500 | 6.9607 | 0.1033 |
| 6.7508 | 0.6834 | 600 | 6.5195 | 0.1140 |
| 6.4206 | 0.7973 | 700 | 6.3211 | 0.1240 |
| 6.2816 | 0.9112 | 800 | 6.2062 | 0.1318 |
| 6.1826 | 1.0251 | 900 | 6.1258 | 0.1360 |
| 6.1021 | 1.1390 | 1000 | 6.0655 | 0.1389 |
| 5.7741 | 2.2779 | 2000 | 5.7992 | 0.1567 |
| 5.6606 | 3.4169 | 3000 | 5.7170 | 0.1613 |
| 5.5853 | 4.5558 | 4000 | 5.6723 | 0.1629 |
| 5.579 | 5.6948 | 5000 | 5.6495 | 0.1645 |
| 5.5291 | 6.8337 | 6000 | 5.6294 | 0.1661 |
| 5.5188 | 7.9727 | 7000 | 5.6190 | 0.1671 |
| 5.4853 | 9.1116 | 8000 | 5.6108 | 0.1680 |
| 5.5257 | 10.2506 | 9000 | 5.5980 | 0.1695 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kicoi/ppo-LunarLander-v2 | kicoi | 2025-06-16T11:53:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T11:14:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.43 +/- 24.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Checkpoint filename assumed to follow the standard "<algo>-<env>.zip" naming
checkpoint = load_from_hub("kicoi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.25_0.75_epoch1 | MinaMila | 2025-06-16T11:53:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF | Triangle104 | 2025-06-16T11:53:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:43:31Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models, in the hope of matching the performance of each of them.
This is just a test; merging different variants of the same model type is another avenue worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -c 2048
```
|
Shah-Sapna-Kumari-Official-Viral-Videos/FULL.VIDEO.Sapna.Shah.Viral.Video.Tutorial.Official | Shah-Sapna-Kumari-Official-Viral-Videos | 2025-06-16T11:52:58Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T11:52:50Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
dhruvsangani/FeatSystems_QA_Model | dhruvsangani | 2025-06-16T11:52:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:52:45Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tomaarsen/splade-cocondenser-ensembledistil-nq | tomaarsen | 2025-06-16T11:52:22Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:99000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:naver/splade-cocondenser-ensembledistil",
"base_model:finetune:naver/splade-cocondenser-ensembledistil",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-16T11:52:07Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: Rollin' (Limp Bizkit song) The music video was filmed atop the South Tower
of the former World Trade Center in New York City. The introduction features Ben
Stiller and Stephen Dorff mistaking Fred Durst for the valet and giving him the
keys to their Bentley Azure. Also making a cameo is break dancer Mr. Wiggles.
The rest of the video has several cuts to Durst and his bandmates hanging out
of the Bentley as they drive about Manhattan. The song Ben Stiller is playing
at the beginning is "My Generation" from the same album. The video also features
scenes of Fred Durst with five girls dancing in a room. The video was filmed around
the same time as the film Zoolander, which explains Stiller and Dorff's appearance.
Fred Durst has a small cameo in that film.
- text: 'Maze Runner: The Death Cure On April 22, 2017, the studio delayed the release
date once again, to February 9, 2018, in order to allow more time for post-production;
months later, on August 25, the studio moved the release forward two weeks.[17]
The film will premiere on January 26, 2018 in 3D, IMAX and IMAX 3D.[18][19]'
- text: who played the dj in the movie the warriors
- text: Lionel Messi Born and raised in central Argentina, Messi was diagnosed with
a growth hormone deficiency as a child. At age 13, he relocated to Spain to join
Barcelona, who agreed to pay for his medical treatment. After a fast progression
through Barcelona's youth academy, Messi made his competitive debut aged 17 in
October 2004. Despite being injury-prone during his early career, he established
himself as an integral player for the club within the next three years, finishing
2007 as a finalist for both the Ballon d'Or and FIFA World Player of the Year
award, a feat he repeated the following year. His first uninterrupted campaign
came in the 2008–09 season, during which he helped Barcelona achieve the first
treble in Spanish football. At 22 years old, Messi won the Ballon d'Or and FIFA
World Player of the Year award by record voting margins.
- text: 'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim
for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman''s
film Smiles of a Summer Night. It is a ballad from Act Two, in which the character
Desirée reflects on the ironies and disappointments of her life. Among other things,
she looks back on an affair years earlier with the lawyer Fredrik, who was deeply
in love with her but whose marriage proposals she had rejected. Meeting him after
so long, she realizes she is in love with him and finally ready to marry him,
but now it is he who rejects her: he is in an unconsummated marriage with a much
younger woman. Desirée proposes marriage to rescue him from this situation, but
he declines, citing his dedication to his bride. Reacting to his rejection, Desirée
sings this song. The song is later reprised as a coda after Fredrik''s young wife
runs away with his son, and Fredrik is finally free to accept Desirée''s offer.[1]'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 58.16202595921652
energy_consumed: 0.14963140390123159
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.4
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: naver/splade-cocondenser-ensembledistil finetuned on Natural-Questions tuples
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.66
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.78
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.86
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.22
name: Dot Precision@3
- type: dot_precision@5
value: 0.15600000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.086
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.66
name: Dot Recall@3
- type: dot_recall@5
value: 0.78
name: Dot Recall@5
- type: dot_recall@10
value: 0.86
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6354826541219594
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5633571428571428
name: Dot Mrr@10
- type: dot_map@100
value: 0.5707568446046707
name: Dot Map@100
- type: query_active_dims
value: 80.04000091552734
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973776292210363
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 187.86317443847656
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9938449913361354
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.38
name: Dot Precision@3
- type: dot_precision@5
value: 0.348
name: Dot Precision@5
- type: dot_precision@10
value: 0.292
name: Dot Precision@10
- type: dot_recall@1
value: 0.06151521998742045
name: Dot Recall@1
- type: dot_recall@3
value: 0.09888886518769423
name: Dot Recall@3
- type: dot_recall@5
value: 0.11905430040543279
name: Dot Recall@5
- type: dot_recall@10
value: 0.15207487151594892
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3662320514229047
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.524547619047619
name: Dot Mrr@10
- type: dot_map@100
value: 0.17259980178609616
name: Dot Map@100
- type: query_active_dims
value: 82.4800033569336
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972976868043728
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 292.8273010253906
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9904060251285831
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.52
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.52
name: Dot Precision@1
- type: dot_precision@3
value: 0.24
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.08999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.5
name: Dot Recall@1
- type: dot_recall@3
value: 0.68
name: Dot Recall@3
- type: dot_recall@5
value: 0.77
name: Dot Recall@5
- type: dot_recall@10
value: 0.8
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6646829822638323
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6318571428571428
name: Dot Mrr@10
- type: dot_map@100
value: 0.6182603061292137
name: Dot Map@100
- type: query_active_dims
value: 79.45999908447266
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973966319676144
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 208.1573028564453
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9931800896777261
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.4533333333333333
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6533333333333333
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7399999999999999
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8133333333333334
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4533333333333333
name: Dot Precision@1
- type: dot_precision@3
value: 0.27999999999999997
name: Dot Precision@3
- type: dot_precision@5
value: 0.22399999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.156
name: Dot Precision@10
- type: dot_recall@1
value: 0.3271717399958068
name: Dot Recall@1
- type: dot_recall@3
value: 0.47962962172923146
name: Dot Recall@3
- type: dot_recall@5
value: 0.5563514334684776
name: Dot Recall@5
- type: dot_recall@10
value: 0.604024957171983
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5554658959362321
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5732539682539682
name: Dot Mrr@10
- type: dot_map@100
value: 0.45387231750666013
name: Dot Map@100
- type: query_active_dims
value: 80.66000111897786
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973573159976744
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 219.49083174763393
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9928087664062764
name: Corpus Sparsity Ratio
---
# naver/splade-cocondenser-ensembledistil finetuned on Natural-Questions tuples
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nq")
# Run inference
queries = [
"is send in the clowns from a musical",
]
documents = [
'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman\'s film Smiles of a Summer Night. It is a ballad from Act Two, in which the character Desirรฉe reflects on the ironies and disappointments of her life. Among other things, she looks back on an affair years earlier with the lawyer Fredrik, who was deeply in love with her but whose marriage proposals she had rejected. Meeting him after so long, she realizes she is in love with him and finally ready to marry him, but now it is he who rejects her: he is in an unconsummated marriage with a much younger woman. Desirรฉe proposes marriage to rescue him from this situation, but he declines, citing his dedication to his bride. Reacting to his rejection, Desirรฉe sings this song. The song is later reprised as a coda after Fredrik\'s young wife runs away with his son, and Fredrik is finally free to accept Desirรฉe\'s offer.[1]',
'The Suite Life on Deck The Suite Life on Deck is an American sitcom that aired on Disney Channel from September 26, 2008 to May 6, 2011. It is a sequel/spin-off of the Disney Channel Original Series The Suite Life of Zack & Cody. The series follows twin brothers Zack and Cody Martin and hotel heiress London Tipton in a new setting, the SS Tipton, where they attend classes at "Seven Seas High School" and meet Bailey Pickett while Mr. Moseby manages the ship. The ship travels around the world to nations such as Italy, France, Greece, India, Sweden and the United Kingdom where the characters experience different cultures, adventures, and situations.[1]',
    'Money in the Bank ladder match The first match was contested in 2005 at WrestleMania 21, after being invented (in kayfabe) by Chris Jericho.[1] At the time, it was exclusive to wrestlers of the Raw brand, and Edge won the inaugural match.[1] From then until 2010, the Money in the Bank ladder match, now open to all WWE brands, became a WrestleMania mainstay. 2010 saw a second and third Money in the Bank ladder match when the Money in the Bank pay-per-view debuted in July. Unlike the matches at WrestleMania, this new event featured two such ladder matches – one each for a contract for the WWE Championship and World Heavyweight Championship, respectively.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[38.2508, 3.8446, 1.6656]])
```
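The similarity above is a plain dot product over sparse vectors, so only dimensions active in both the query and the document contribute to the score. A minimal self-contained sketch of that scoring using toy `{dimension: weight}` dicts (not actual model output):

```python
def sparse_dot(q, d):
    """Dot product of two sparse vectors stored as {dimension: weight} dicts.

    Iterates only over the vector with fewer active dimensions, which is
    why sparse retrieval scoring stays cheap despite the 30522-dim space.
    """
    if len(d) < len(q):
        q, d = d, q
    return sum(w * d.get(i, 0.0) for i, w in q.items())

# Toy vectors; real embeddings from this model have roughly 80 (query)
# and 220 (document) active dimensions out of 30522.
query_vec = {101: 1.5, 2054: 0.5}
doc_vec = {101: 2.0, 999: 0.25, 2054: 1.0}
print(sparse_dot(query_vec, doc_vec))  # 1.5*2.0 + 0.5*1.0 = 3.5
```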
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:----------------------|:------------|:-------------|:-----------|
| dot_accuracy@1 | 0.42 | 0.42 | 0.52 |
| dot_accuracy@3 | 0.66 | 0.58 | 0.72 |
| dot_accuracy@5 | 0.78 | 0.64 | 0.8 |
| dot_accuracy@10 | 0.86 | 0.74 | 0.84 |
| dot_precision@1 | 0.42 | 0.42 | 0.52 |
| dot_precision@3 | 0.22 | 0.38 | 0.24 |
| dot_precision@5 | 0.156 | 0.348 | 0.168 |
| dot_precision@10 | 0.086 | 0.292 | 0.09 |
| dot_recall@1 | 0.42 | 0.0615 | 0.5 |
| dot_recall@3 | 0.66 | 0.0989 | 0.68 |
| dot_recall@5 | 0.78 | 0.1191 | 0.77 |
| dot_recall@10 | 0.86 | 0.1521 | 0.8 |
| **dot_ndcg@10** | **0.6355** | **0.3662** | **0.6647** |
| dot_mrr@10 | 0.5634 | 0.5245 | 0.6319 |
| dot_map@100 | 0.5708 | 0.1726 | 0.6183 |
| query_active_dims | 80.04 | 82.48 | 79.46 |
| query_sparsity_ratio | 0.9974 | 0.9973 | 0.9974 |
| corpus_active_dims | 187.8632 | 292.8273 | 208.1573 |
| corpus_sparsity_ratio | 0.9938 | 0.9904 | 0.9932 |
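The `sparsity_ratio` rows are derived from the active-dimension counts: the ratio is the fraction of the 30522 vocabulary dimensions that are zero, i.e. `1 - active_dims / 30522`. A quick check that reproduces the NanoMSMARCO rows from the table above:

```python
VOCAB_SIZE = 30522  # output dimensionality of this model

def sparsity_ratio(active_dims, vocab_size=VOCAB_SIZE):
    """Fraction of dimensions that are zero in the (averaged) embedding."""
    return 1.0 - active_dims / vocab_size

print(round(sparsity_ratio(80.04), 4))     # query side  -> 0.9974
print(round(sparsity_ratio(187.8632), 4))  # corpus side -> 0.9938
```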
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4533 |
| dot_accuracy@3 | 0.6533 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.8133 |
| dot_precision@1 | 0.4533 |
| dot_precision@3 | 0.28 |
| dot_precision@5 | 0.224 |
| dot_precision@10 | 0.156 |
| dot_recall@1 | 0.3272 |
| dot_recall@3 | 0.4796 |
| dot_recall@5 | 0.5564 |
| dot_recall@10 | 0.604 |
| **dot_ndcg@10** | **0.5555** |
| dot_mrr@10 | 0.5733 |
| dot_map@100 | 0.4539 |
| query_active_dims | 80.66 |
| query_sparsity_ratio | 0.9974 |
| corpus_active_dims | 219.4908 |
| corpus_sparsity_ratio | 0.9928 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/หtaษชbษr/, Latin: Tiberis,[1] Italian: Tevere [หteหvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252ย mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709ย sqย mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| -1 | -1 | - | - | 0.6780 | 0.3645 | 0.7145 | 0.5857 |
| 0.0323 | 200 | 0.0131 | - | - | - | - | - |
| 0.0646 | 400 | 0.0126 | - | - | - | - | - |
| 0.0970 | 600 | 0.0111 | - | - | - | - | - |
| 0.1293 | 800 | 0.0116 | - | - | - | - | - |
| 0.1616 | 1000 | 0.0097 | 0.0268 | 0.6294 | 0.3534 | 0.6406 | 0.5411 |
| 0.1939 | 1200 | 0.0112 | - | - | - | - | - |
| 0.2262 | 1400 | 0.0153 | - | - | - | - | - |
| 0.2586 | 1600 | 0.0188 | - | - | - | - | - |
| 0.2909 | 1800 | 0.0124 | - | - | - | - | - |
| 0.3232 | 2000 | 0.0086 | 0.0341 | 0.6441 | 0.3527 | 0.6533 | 0.5500 |
| 0.3555 | 2200 | 0.0179 | - | - | - | - | - |
| 0.3878 | 2400 | 0.0168 | - | - | - | - | - |
| 0.4202 | 2600 | 0.0153 | - | - | - | - | - |
| 0.4525 | 2800 | 0.0204 | - | - | - | - | - |
| 0.4848 | 3000 | 0.007 | 0.0288 | 0.6608 | 0.3426 | 0.6768 | 0.5601 |
| 0.5171 | 3200 | 0.0105 | - | - | - | - | - |
| 0.5495 | 3400 | 0.0198 | - | - | - | - | - |
| 0.5818 | 3600 | 0.0077 | - | - | - | - | - |
| 0.6141 | 3800 | 0.0121 | - | - | - | - | - |
| 0.6464 | 4000 | 0.0091 | 0.0331 | 0.6524 | 0.3547 | 0.6736 | 0.5602 |
| 0.6787 | 4200 | 0.0095 | - | - | - | - | - |
| 0.7111 | 4400 | 0.011 | - | - | - | - | - |
| 0.7434 | 4600 | 0.0077 | - | - | - | - | - |
| 0.7757 | 4800 | 0.0032 | - | - | - | - | - |
| 0.8080 | 5000 | 0.0061 | 0.0214 | 0.6537 | 0.3596 | 0.6689 | 0.5607 |
| 0.8403 | 5200 | 0.009 | - | - | - | - | - |
| 0.8727 | 5400 | 0.0045 | - | - | - | - | - |
| 0.9050 | 5600 | 0.0093 | - | - | - | - | - |
| 0.9373 | 5800 | 0.0086 | - | - | - | - | - |
| 0.9696 | 6000 | 0.0091 | 0.0244 | 0.6369 | 0.3652 | 0.6647 | 0.5556 |
| -1 | -1 | - | - | 0.6355 | 0.3662 | 0.6647 | 0.5555 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.150 kWh
- **Carbon Emitted**: 0.058 kg of CO2
- **Hours Used**: 0.4 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stรฉphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
xzkb/ppo-CartPole-v1 | xzkb | 2025-06-16T11:51:34Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T11:51:24Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 253.80 +/- 132.79
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'notebook'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'xzkb/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
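The `gae`, `gamma`, and `gae_lambda` entries above refer to Generalized Advantage Estimation. A minimal pure-Python sketch of the backward recursion those settings control (an illustration, not the CleanRL implementation itself; here `dones[t] == 1.0` means the episode ended at step `t`, so no bootstrapping past it):

```python
def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one rollout.

    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    A_t     = delta_t + gamma * lam * (1 - done_t) * A_{t+1}
    """
    advantages = [0.0] * len(rewards)
    last_adv = 0.0
    for t in reversed(range(len(rewards))):
        next_value = last_value if t == len(rewards) - 1 else values[t + 1]
        next_nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * next_value * next_nonterminal - values[t]
        last_adv = delta + gamma * lam * next_nonterminal * last_adv
        advantages[t] = last_adv
    return advantages

adv = compute_gae([1.0, 1.0], [0.5, 0.6], [0.0, 0.0], last_value=0.7)
print([round(a, 4) for a in adv])  # [2.122, 1.093]
```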
|
Somalitts/6aad | Somalitts | 2025-06-16T11:51:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-06-16T10:36:16Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: 6aad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6aad
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3844 | 0.0142 | 100 | 0.3050 |
| 0.2729 | 0.0284 | 200 | 0.2091 |
| 0.2174 | 0.0427 | 300 | 0.1598 |
| 0.1954 | 0.0569 | 400 | 0.1394 |
| 0.1895 | 0.0711 | 500 | 0.1322 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
praa/GenAMI | praa | 2025-06-16T11:49:30Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"sexism",
"hate",
"indic",
"empowerment",
"gender",
"text-classification",
"en",
"hi",
"mr",
"bn",
"ta",
"te",
"ml",
"ur",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"doi:10.57967/hf/5784",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-classification | 2025-06-13T17:37:25Z | ---
license: cc-by-nc-nd-4.0
metrics:
- f1
base_model:
- FacebookAI/xlm-roberta-base
language:
- en
- hi
- mr
- bn
- ta
- te
- ml
- ur
pipeline_tag: text-classification
tags:
- sexism
- hate
- indic
- empowerment
- gender
---
# Model Card for Model ID
Classifies polarised gendered discourse for all Indic languages.

- 0 = Neutral
- 1 = Sexist and misogynistic
- 2 = Empowering
## Model Details
Come back later at an unspecified date for further information
## Author Details
Praachi Kumar
Research Fellow
United Nations University - MERIT
### Model Description
- **Developed by:** Praachi Kumar
- **Model type:** Fine-tuned XLM-RoBERTa base for sequence classification
- **Language(s) (NLP):** Multiple Indic languages
- **License:** Non-commercial, no derivatives (CC BY-NC-ND 4.0)
- **Paper:** forthcoming
## Uses
Social science research, intended for academic use; the model outputs three labels.
## Bias, Risks, and Limitations
Single annotator coded the data. Painstakingly.
### Recommendations
Please contact me at [email protected] for instructions on further use
## How to Get Started with the Model
With a bit of patience, I am still working on this model card
## Training Details
### Training Data
Please contact me at [email protected] for this information
#### Metrics
Macro Average F1 Score: 0.83
Balanced Accuracy: 0.33
### Results
Come back later for an updated model card
## Citation
**Model**
**BibTeX:**
@misc{genami2025,
author = {Praachi Kumar},
title = {genAMI},
year = {2025},
month = {March},
day = {13},
howpublished = {\url{https://doi.org/10.57967/hf/5784}}
}
**APA:** Kumar, P. (2025). genAMI. Hugging Face. https://doi.org/10.57967/hf/5784
**Paper**: Forthcoming, in my PhD Thesis :) |
LaaP-ai/donut-base-invoice-v1.07 | LaaP-ai | 2025-06-16T11:49:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T11:49:11Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.07
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
CreitinGameplays/Mistral-Nemo-12B-R1-v0.1 | CreitinGameplays | 2025-06-16T11:49:24Z | 52 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70Bmistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-21T19:17:13Z | ---
license: mit
datasets:
- CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70Bmistral
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
library_name: transformers
new_version: CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
---
## Mistral Nemo 12B R1

Took **96 hours** to finetune on **2x Nvidia RTX A6000** with the following settings:
- Batch size: 3
- Gradient accumulation steps: 1
- Epochs: 1
- Learning rate: 1e-4
- Warmup ratio: 0.1
Run the model:
```python
import torch
from transformers import pipeline
model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-v0.1"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "How many r's are in strawberry?"}
]
outputs = pipe(
messages,
temperature=0.6,
top_p=1.0,
top_k=50,
repetition_penalty=1.1,
max_new_tokens=2048
)
print(outputs[0]["generated_text"][-1])
```
Recommended system prompt:
```
You are an AI focused on providing systematic, well-reasoned responses. Response Structure: - Format: <think>{reasoning}</think>{answer} - Process: Think first, then answer.
```
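Since responses follow the `<think>{reasoning}</think>{answer}` structure described above, the reasoning block can be separated from the final answer before display. A minimal sketch (the helper name is made up; only the tag format comes from the prompt above):

```python
import re

def split_reasoning(text: str):
    """Split a response of the form <think>{reasoning}</think>{answer}.

    Returns (reasoning, answer); if no <think> block is found, the whole
    text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>(.*)", text, flags=re.DOTALL)
    if match is None:
        return None, text.strip()
    return match.group(1).strip(), match.group(2).strip()

reasoning, answer = split_reasoning(
    "<think>Count the r's: s-t-r-a-w-b-e-r-r-y has three.</think>There are 3 r's."
)
print(answer)  # -> There are 3 r's.
```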
Response example:
```
<s>[INST]You are an AI focused on providing systematic, well-reasoned responses. Response Structure: - Format: <think>{reasoning}</think>{answer} - Process: Think first, then answer.
what is a nvidia tesla t4[/INST]<think>Okay, so I need to figure out what a NVIDIA Tesla T4 is. I remember that NVIDIA makes GPUs for various purposes, but I'm not exactly sure about the specifics of the Tesla models. Let me start by recalling what I know about GPUs in general. They're used for graphics rendering, right? But I also heard that they can be used for other tasks like machine learning and deep learning.
I think the Tesla series from NVIDIA is specifically designed for these non-gaming applications. So, the Tesla T4 must be one of those. I've heard terms like Tensor cores before, which I believe are specialized hardware units inside some GPUs for accelerating certain operations. Maybe the T4 has Tensor cores?
Let me try to break it down. The name "Tesla" suggests it's part of their data center or professional line. The number "4" probably indicates it's the fourth generation or something similar. I should check if there are previous versions like T1, T2, etc., to see where it stands in the lineup.
I remember that the Tesla V100 was a high-end model with a lot of memory and performance. The T4 might be a step below that or perhaps a different type altogether. Maybe it's more affordable or targeted at specific use cases.
I think the T4 is known for its support for mixed precision training, which allows using both floating-point and half-precision numbers together. This could make training neural networks faster and more efficient. It might also have features like NVLink for connecting multiple GPUs, which would be useful for distributed computing.
Looking at specs, I believe the T4 has 320 Tensor cores. Each Tensor core can perform matrix multiplications much faster than regular CUDA cores. So, it's optimized for machine learning tasks. The amount of memory is probably around 16 GB, which is less than the V100's 16 GB or 32 GB, but still sufficient for many applications.
I also recall that the T4 supports NVIDIA's GPU virtualization technology, which allows multiple VMs to share a single GPU. That would be great for cloud providers who want to maximize resource utilization.
In terms of performance, the T4 is likely slower than the V100 but faster than older models like the P400. It might be used in applications where cost is a factor, but performance isn't as critical as in high-end research or enterprise settings.
I should also consider the form factors. I think the T4 comes in PCIe cards, maybe with different sizes depending on the applicationโlike full-height or low-profile. It might also be available in server boards or as part of pre-built systems.
Another thing is power consumption. The T4 is probably more energy-efficient than earlier models, which is important for data centers looking to reduce costs and environmental impact.
I wonder how it compares to AMD's offerings. I know AMD has GPUs like the Radeon Instinct, but I'm not sure how they stack up against the Tesla T4 in terms of performance and features.
In summary, the Tesla T4 is a professional-grade GPU designed for data centers and machine learning tasks. It features Tensor cores for acceleration, supports mixed precision, and has enough memory and connectivity options to handle modern workloads efficiently.
</think>
The NVIDIA Tesla T4 is a professional-grade GPU designed for data centers and machine learning tasks. Key features include:
1. **Tensor Cores**: Equipped with 320 Tensor cores, each optimizing matrix multiplications for accelerated machine learning tasks.
2. **Mixed Precision Training**: Supports simultaneous use of floating-point and half-precision numbers, enhancing efficiency in neural network training.
3. **Memory and Connectivity**: Offers 16 GB of memory and NVLink support for connecting multiple GPUs, ideal for distributed computing.
4. **GPU Virtualization**: Enables multiple virtual machines to share a single GPU, maximizing resource utilization in cloud environments.
5. **Performance and Power**: Slower than the V100 but faster than older models, with improved energy efficiency suitable for cost-conscious data centers.
6. **Form Factors**: Available in PCIe cards with varying sizes, including full-height and low-profile options, along with server board and pre-built system configurations.
7. **Comparison with AMD**: While the T4 competes with AMD's Radeon Instinct GPUs, it offers unique features tailored to data center and machine learning needs.
In conclusion, the Tesla T4 is a versatile solution for data centers seeking balance between performance, efficiency, and affordability. Its features make it suitable for a range of applications, from cloud services to machine learning projects.</s>
```
**Note**: The model was mainly finetuned on an English dataset, so it may not perform well in other languages. The model may also enter an infinite response loop after the reasoning step. |

asimov-ai/Magistral-Small-3.1-documentazione | asimov-ai | 2025-06-16T11:49:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:42:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
punxti/q-FrozenLake-v1-4x4-noSlippery | punxti | 2025-06-16T11:49:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T11:40:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="punxti/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
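At evaluation time a tabular Q-learning policy is typically read greedily — argmax over the actions of the current state. A toy sketch, independent of gym (the one-row-per-state `qtable` layout is an assumption about the pickled model dict):

```python
def greedy_action(qtable, state):
    """Pick the highest-value action for a state from a tabular Q-function."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 2-state, 3-action table; in practice this would come from the
# loaded model dict above (the exact key name is an assumption).
toy_q = [
    [0.1, 0.9, 0.0],
    [0.5, 0.2, 0.7],
]
print(greedy_action(toy_q, 0))  # -> 1
```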
|
TV-nulook-india-viral-videos/Original.Full.Clip.nulook.india.Viral.Video.Leaks.Official | TV-nulook-india-viral-videos | 2025-06-16T11:48:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T11:48:32Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
pprraattiikk9900/results | pprraattiikk9900 | 2025-06-16T11:48:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T11:47:52Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
nnilayy/deap-arousal-binary-classification-no-wd-Kfold-3 | nnilayy | 2025-06-16T11:48:20Z | 4 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-27T04:34:44Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
sdiazlor/modernbert-embed-base-crossencoder-human-rights | sdiazlor | 2025-06-16T11:46:55Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"cross-encoder",
"sentence-transformers",
"text-ranking",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-ranking | 2025-01-19T21:06:10Z | ---
library_name: transformers
tags:
- cross-encoder
- sentence-transformers
pipeline_tag: text-ranking
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/llm-jp-3.1-8x13b-instruct4-8bit | mlx-community | 2025-06-16T11:45:36Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"ja",
"base_model:llm-jp/llm-jp-3.1-8x13b-instruct4",
"base_model:quantized:llm-jp/llm-jp-3.1-8x13b-instruct4",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-06-16T09:42:56Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: mlx
inference: false
base_model: llm-jp/llm-jp-3.1-8x13b-instruct4
tags:
- mlx
---
# mlx-community/llm-jp-3.1-8x13b-instruct4-8bit
This model [mlx-community/llm-jp-3.1-8x13b-instruct4-8bit](https://huggingface.co/mlx-community/llm-jp-3.1-8x13b-instruct4-8bit) was
converted to MLX format from [llm-jp/llm-jp-3.1-8x13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-8x13b-instruct4)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-3.1-8x13b-instruct4-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.15_0.25_epoch1 | MinaMila | 2025-06-16T11:44:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:42:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v0.3 | fc91 | 2025-06-16T11:43:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v0.2",
"base_model:finetune:fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:36:56Z | ---
base_model: fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v0.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
RL-trained on the openai/gsm8k dataset (samples 1308 through 1468). |
23ikram/llama3b | 23ikram | 2025-06-16T11:43:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:42:34Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jarvis1111/DoctorAgent-RL | Jarvis1111 | 2025-06-16T11:41:35Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:39:02Z | ---
license: apache-2.0
---
|
prakod/codemix-test | prakod | 2025-06-16T11:40:36Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:ai4bharat/IndicBART",
"base_model:finetune:ai4bharat/IndicBART",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-16T08:52:24Z | ---
library_name: transformers
base_model: ai4bharat/IndicBART
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: codemix-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codemix-test
This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5032
- Bleu: 17.4363
- Gen Len: 20.978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 5.8666 | 1.0 | 1004 | 4.9742 | 13.3823 | 21.0 |
| 4.8088 | 2.0 | 2008 | 4.0212 | 15.3375 | 21.0 |
| 4.2735 | 3.0 | 3012 | 3.6499 | 16.3145 | 21.0 |
| 4.0836 | 4.0 | 4016 | 3.5329 | 17.3835 | 20.996 |
| 4.0152 | 5.0 | 5020 | 3.5032 | 17.4363 | 20.978 |
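With `lr_scheduler_type: linear` and no warmup configured, the Trainer decays the learning rate linearly from 1e-06 down to zero over the full run (1004 optimizer steps per epoch × 5 epochs here). A rough sketch of that schedule (a simplified re-implementation, not the Trainer's own code):

```python
def linear_lr(step, total_steps, base_lr=1e-06, warmup_steps=0):
    """Linear schedule: ramp up over warmup_steps, then decay linearly to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total_steps = 5020  # 1004 steps/epoch x 5 epochs, per the table above
print(linear_lr(0, total_steps))          # -> 1e-06
print(linear_lr(total_steps, total_steps))  # -> 0.0
```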
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
moogin/gemly-lora | moogin | 2025-06-16T11:40:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-pt",
"base_model:finetune:unsloth/gemma-3-1b-pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:39:53Z | ---
base_model: unsloth/gemma-3-1b-pt
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** moogin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-pt
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bysq/autism-assistant-qwen2 | bysq | 2025-06-16T11:38:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"autism",
"assistant",
"chinese",
"unsloth",
"merged-model",
"zh",
"dataset:custom",
"base_model:unsloth/Qwen2.5-1.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-1.5B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T08:57:19Z | ---
language:
- zh
license: apache-2.0
tags:
- autism
- assistant
- chinese
- qwen2
- unsloth
- merged-model
base_model: unsloth/Qwen2.5-1.5B-bnb-4bit
datasets:
- custom
library_name: transformers
---
# Autism Language Assistant (Qwen2.5) - Full Merged Version
This is an AI assistant fine-tuned from Qwen2.5-1.5B to help autistic children understand language. **This version is a fully merged model; no additional LoRA weights are required.**
## Model Highlights
- ✅ **Fully merged**: the LoRA weights have been merged into the base model
- ✅ **Ready to use**: no extra configuration or adapter files needed
- ✅ **Purpose-built**: optimized specifically for autism-related language understanding
## Capabilities
The model can:
- Analyze the emotion and intent behind an utterance
- Explain complex expressions in simple, clear language
- Offer concrete response suggestions
- Understand the expressive style of autistic speakers
## Usage
### Install dependencies
```python
!pip install bitsandbytes accelerate
!pip install transformers torch
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("bysq/autism-assistant-qwen2")
model = AutoModelForCausalLM.from_pretrained("bysq/autism-assistant-qwen2")

# Move the model to GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Example usage (the prompt is kept in Chinese, matching the fine-tuning data)
def analyze_expression(original_text, autism_expression):
    prompt = f'''你是一个专门帮助理解自闭症患者表达的AI助手。

原始表达："{original_text}"
自闭症患者的表达："{autism_expression}"

请分析并回答：
- 情感分析：'''
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=200,
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.pad_token_id
        )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response[len(prompt):].strip()

# Test
result = analyze_expression("可以把东西给我？", "不你")
print(result)
```
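When screening several utterances at once, the prompts can be pre-built without touching the model. The helper below is a hypothetical convenience wrapper (the `build_prompt`/`build_prompts` names and the batch format are not part of the original card; the Chinese template mirrors the one used in the usage example above):

```python
# Hypothetical helper: build the analysis prompt for a batch of
# (original_text, autism_expression) pairs without loading the model.
def build_prompt(original_text: str, autism_expression: str) -> str:
    # Prompt template kept in Chinese to match the fine-tuning data.
    return (
        "你是一个专门帮助理解自闭症患者表达的AI助手。\n\n"
        f"原始表达：\"{original_text}\"\n"
        f"自闭症患者的表达：\"{autism_expression}\"\n\n"
        "请分析并回答：\n"
        "- 情感分析："
    )

def build_prompts(pairs):
    # Returns one prompt string per (original, expression) pair.
    return [build_prompt(orig, expr) for orig, expr in pairs]

prompts = build_prompts([("可以把东西给我？", "不你")])
print(len(prompts))  # 1
```

Each returned string can then be fed to `tokenizer`/`model.generate` exactly as in the example above.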
## License
Apache 2.0
## Disclaimer
This model is intended as an assistive tool only and is not a substitute for professional medical advice. It should be used under professional guidance.
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-06-16T11:36:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:34:19Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
  **Risk of Sensitive or Controversial Outputs**: This model's safety filtering
  has been significantly reduced, potentially generating sensitive, controversial,
  or inappropriate content. Users should exercise caution and rigorously review generated
  outputs.

  **Not Suitable for All Audiences**: Due to limited content filtering, the model's
  outputs may be inappropriate for public settings, underage users, or applications
  requiring high security.

  **Legal and Ethical Responsibilities**: Users must ensure their usage complies
  with local laws and ethical standards. Generated content may carry legal or ethical
  risks, and users are solely responsible for any consequences.

  **Research and Experimental Use**: It is recommended to use this model for research,
  testing, or controlled environments, avoiding direct use in production or public-facing
  commercial applications.

  **Monitoring and Review Recommendations**: Users are strongly advised to monitor
  model outputs in real-time and conduct manual reviews when necessary to prevent
  the dissemination of inappropriate content.

  **No Default Safety Guarantees**: Unlike standard models, this model has not undergone
  rigorous safety optimization. huihui.ai bears no responsibility for any consequences
  arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models in the hope of capturing the performance of each of them.
This is an experimental test: merging different variants of the same base model is another avenue worth exploring.
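As a rough sizing check before downloading, the GGUF file size can be estimated from the parameter count and the quantization's effective bits per weight (Q5_K_M averages roughly 5.7 bpw; both numbers below are approximations, not values from this card):

```python
def gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GiB for a given quantization."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

# 24B-parameter model at ~5.7 bits/weight (approximate Q5_K_M average)
size = gguf_size_gib(24e9, 5.7)
print(f"~{size:.1f} GiB")
```

This ballpark figure also approximates the memory needed to load the quantized weights, before accounting for the KV cache.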
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -c 2048
```
|
quanda-bench-test/f1c529c-default_LDS | quanda-bench-test | 2025-06-16T11:36:28Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-16T11:20:19Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
23ikram/model3b8bit | 23ikram | 2025-06-16T11:35:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:35:53Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alexandertam/babylm-hulda50-10m-gpt2 | alexandertam | 2025-06-16T11:34:04Z | 65 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T07:59:49Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: babylm-hulda-10m-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babylm-hulda-10m-gpt2
GPT-2 base model pretrained on a mix of 50% synthetic and 50% original data from BabyLM 2025.
It achieves the following results on the evaluation set:
- Loss: 3.5751
- Accuracy: 0.3820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 18000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 8.2384 | 0.1167 | 200 | 7.7935 | 0.1972 |
| 6.9074 | 0.2334 | 400 | 6.5061 | 0.2358 |
| 5.8108 | 0.3501 | 600 | 5.6078 | 0.2549 |
| 5.3823 | 0.4667 | 800 | 5.2613 | 0.2608 |
| 5.1007 | 0.5834 | 1000 | 5.0485 | 0.2705 |
| 4.9232 | 0.7001 | 1200 | 4.9021 | 0.2750 |
| 4.7723 | 0.8168 | 1400 | 4.7936 | 0.2751 |
| 4.6436 | 0.9335 | 1600 | 4.6973 | 0.2822 |
| 4.5757 | 1.0502 | 1800 | 4.6303 | 0.2840 |
| 4.5332 | 1.1669 | 2000 | 4.5745 | 0.2862 |
| 4.0004 | 2.3337 | 4000 | 4.0663 | 0.3348 |
| 3.6798 | 3.5006 | 6000 | 3.8603 | 0.3535 |
| 3.6025 | 4.6674 | 8000 | 3.7458 | 0.3640 |
| 3.5169 | 5.8343 | 10000 | 3.6725 | 0.3715 |
| 3.3989 | 7.0012 | 12000 | 3.6293 | 0.3750 |
| 3.2351 | 8.1680 | 14000 | 3.5991 | 0.3792 |
| 3.2415 | 9.3349 | 16000 | 3.5830 | 0.3811 |
| 3.192 | 10.5018 | 18000 | 3.5751 | 0.3820 |
## BabyLM Challenge 2025 Results
### Results
*Zero-shot*
| Task | Metric | Causal Score |
|-----------------------|------------------|--------------|
| BLiMP | Acc | 65.09 |
| BLiMP Supplement | Acc | 60.34 |
| EWoK | Acc | 49.64 |
| Eye Tracking | Change in Rยฒ | 0.37 |
| Self-paced Reading | Change in Rยฒ | 0.15 |
| Entity Tracking | Acc | 22.05 |
| WUGs | Acc | 50 |
*Finetuning*
| Task | Metric | Score |
|-----------|--------|-------|
| BoolQ | Acc | 64.04 |
| MNLI | Acc | 35.96 |
| MRPC | F1 | 81.05 |
| QQP | F1 | 62.74 |
| MultiRC | Acc | 57.55 |
| RTE | Acc | 53.96 |
| WSC | Acc | 61.54 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rooclown101/pstmergetool | rooclown101 | 2025-06-16T11:33:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:33:05Z | ---
license: apache-2.0
---
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.15_0.75_epoch2 | MinaMila | 2025-06-16T11:32:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:30:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gayaz2000/DPOTrained-Llama3.2-Lora-Model | Gayaz2000 | 2025-06-16T11:31:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-16T11:30:10Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
duchao1210/DPO_Qwen25_3B_32_0.05_2000kmap_1e-7 | duchao1210 | 2025-06-16T11:31:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:duchao1210/qwen_2.5_3B_5k_r128",
"base_model:finetune:duchao1210/qwen_2.5_3B_5k_r128",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-14T16:05:18Z | ---
base_model: duchao1210/qwen_2.5_3B_5k_r128
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** duchao1210
- **License:** apache-2.0
- **Finetuned from model:** duchao1210/qwen_2.5_3B_5k_r128
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joanna302/Qwen3-4B-Base_lo_tt__0.0002 | joanna302 | 2025-06-16T11:28:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:47:42Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joshbarua/Qwen2.5-7B-base-english-japanese-s1k-full-sft | joshbarua | 2025-06-16T11:25:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:21:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shihuai7189/ppo-LunarLander-v2-clip-coef0.2 | shihuai7189 | 2025-06-16T11:25:34Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-10T08:00:01Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -173.73 +/- 90.18
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook'
'seed': 239
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'shihuai7189/ppo-LunarLander-v2-clip-coef0.2'
'batch_size': 512
'minibatch_size': 128}
```
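The derived values at the bottom of the config follow CleanRL's convention: one rollout collects `num_envs * num_steps` transitions, which are then split into minibatches. A quick sketch of that arithmetic (plain Python, not CleanRL's code, using the values above):

```python
# How the batch sizes in the config above are derived (sketch, not CleanRL's code).
num_envs = 4          # parallel environments
num_steps = 128       # rollout length per environment
num_minibatches = 4

batch_size = num_envs * num_steps               # transitions per PPO update
minibatch_size = batch_size // num_minibatches  # transitions per gradient step

print(batch_size, minibatch_size)  # 512 128
```

These match the `batch_size: 512` and `minibatch_size: 128` entries reported in the hyperparameters.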
|
dgambettaphd/M_llm2_run2_gen3_WXS_doc1000_synt64_lr1e-04_acm_SYNALL | dgambettaphd | 2025-06-16T11:22:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:22:27Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shahink/Arpheus-4B | shahink | 2025-06-16T11:21:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:20:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joshbarua/Qwen3-8B-base-french-s1k-full-sft | joshbarua | 2025-06-16T11:20:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:15:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/7629e3f5-7cb0-4ed7-8ce0-307a0af304d6 | sergioalves | 2025-06-16T11:19:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T10:23:45Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c6c33c8265f71baf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.8
group_by_length: false
hub_model_id: sergioalves/7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c6c33c8265f71baf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d030151f-d4ab-4ec7-9455-d97d323144e0
wandb_project: s56-7
wandb_run: your_name
wandb_runid: d030151f-d4ab-4ec7-9455-d97d323144e0
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9609 | 0.0000 | 1 | 1.1693 |
| 1.3796 | 0.0063 | 150 | 1.1672 |
| 1.0046 | 0.0126 | 300 | 1.1660 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-06-16T11:15:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:12:59Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: The safety filtering of this model
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the outputs
of this model may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; whether it can match the performance of all of them remains to be tested.
This is an experiment: merging different variants of models of the same type is another avenue worth exploring.
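As a rough illustration of the MoE idea described above (illustrative only — not this model's actual implementation), a MoE layer replaces a single MLP with several expert MLPs plus a learned router that scores each input and dispatches it to the best-scoring expert. A minimal top-1 routing sketch in plain Python, with toy stand-in experts:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router):
    """Route input x to the single highest-scoring expert (top-1 routing).

    experts: list of callables standing in for expert MLPs.
    router:  one weight vector per expert, used to score the input.
    """
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in router]
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    # Only the chosen expert runs; its output is scaled by the routing weight.
    return [probs[top] * y for y in experts[top](x)], top

# Toy demo: 4 "experts", each just scales the input differently.
experts = [lambda v, s=s: [s * vi for vi in v] for s in (1.0, 2.0, 3.0, 4.0)]
router = [[0.1, 0.0], [0.0, 0.1], [1.0, 0.0], [0.0, 1.0]]
out, chosen = moe_forward([1.0, 0.0], experts, router)
print(chosen)  # 2 — this input scores highest with expert 2
```

Real MoE layers (as in this model, with 4 experts per layer) use learned routers, may activate more than one expert, and add load-balancing losses; the sketch only shows the routing mechanic.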
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_m.gguf -c 2048
```
|
LaaP-ai/donut-base-invoice-v1.05 | LaaP-ai | 2025-06-16T11:12:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T11:11:57Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.05
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
taguser/flightctl-epoch20-2025-Jun-16-only-tests | taguser | 2025-06-16T11:10:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:taguser/flightctl-epoch20-2025-Jun-15-merged",
"base_model:adapter:taguser/flightctl-epoch20-2025-Jun-15-merged",
"license:other",
"region:us"
] | null | 2025-06-16T11:08:22Z | ---
library_name: peft
license: other
base_model: taguser/flightctl-epoch20-2025-Jun-15-merged
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [taguser/flightctl-epoch20-2025-Jun-15-merged](https://huggingface.co/taguser/flightctl-epoch20-2025-Jun-15-merged) on the training_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mattmdjaga/segformer_b2_clothes | mattmdjaga | 2025-06-16T11:10:06Z | 477,246 | 425 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:mattmdjaga/human_parsing_dataset",
"arxiv:2105.15203",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-11-24T09:48:16Z | ---
license: mit
tags:
- vision
- image-segmentation
widget:
- src: https://images.unsplash.com/photo-1643310325061-2beef64926a5?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Nnx8cmFjb29uc3xlbnwwfHwwfHw%3D&w=1000&q=80
example_title: Person
- src: https://freerangestock.com/sample/139043/young-man-standing-and-leaning-on-car.jpg
example_title: Person
datasets:
- mattmdjaga/human_parsing_dataset
---
# Segformer B2 fine-tuned for clothes segmentation
SegFormer model fine-tuned on the [ATR dataset](https://github.com/lemondan/HumanParsing-Dataset) for clothes segmentation, but it can also be used for human segmentation.
The dataset on Hugging Face is called "mattmdjaga/human_parsing_dataset".
**[Training code](https://github.com/mattmdjaga/segformer_b2_clothes)**.
```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn
processor = SegformerImageProcessor.from_pretrained("mattmdjaga/segformer_b2_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("mattmdjaga/segformer_b2_clothes")
url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```
Labels: 0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes", 5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe", 10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg", 14: "Left-arm", 15: "Right-arm", 16: "Bag", 17: "Scarf"
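Downstream use of `pred_seg` is plain array arithmetic; for example, isolating the mask for one class and its pixel coverage (sketched here with a tiny synthetic label map standing in for a real prediction):

```python
import numpy as np

# A 3x3 stand-in for the (H, W) integer label map `pred_seg` produced above.
pred_seg = np.array([[0, 4, 4],
                     [0, 4, 6],
                     [0, 0, 6]])

UPPER_CLOTHES = 4  # label index from the list above
mask = pred_seg == UPPER_CLOTHES      # boolean mask for that class
coverage = mask.mean()                # fraction of pixels assigned to the class
print(int(mask.sum()), round(float(coverage), 3))  # 3 0.333
```

The same masking works on a real prediction of any resolution, e.g. to crop or recolor only the upper-clothes region.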
### Evaluation
| Label Index | Label Name | Category Accuracy | Category IoU |
|:-------------:|:----------------:|:-----------------:|:------------:|
| 0 | Background | 0.99 | 0.99 |
| 1 | Hat | 0.73 | 0.68 |
| 2 | Hair | 0.91 | 0.82 |
| 3 | Sunglasses | 0.73 | 0.63 |
| 4 | Upper-clothes | 0.87 | 0.78 |
| 5 | Skirt | 0.76 | 0.65 |
| 6 | Pants | 0.90 | 0.84 |
| 7 | Dress | 0.74 | 0.55 |
| 8 | Belt | 0.35 | 0.30 |
| 9 | Left-shoe | 0.74 | 0.58 |
| 10 | Right-shoe | 0.75 | 0.60 |
| 11 | Face | 0.92 | 0.85 |
| 12 | Left-leg | 0.90 | 0.82 |
| 13 | Right-leg | 0.90 | 0.81 |
| 14 | Left-arm | 0.86 | 0.74 |
| 15 | Right-arm | 0.82 | 0.73 |
| 16 | Bag | 0.91 | 0.84 |
| 17 | Scarf | 0.63 | 0.29 |
Overall Evaluation Metrics:
- Evaluation Loss: 0.15
- Mean Accuracy: 0.80
- Mean IoU: 0.69
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
nvidia/PartPacker | nvidia | 2025-06-16T11:09:25Z | 0 | 3 | null | [
"en",
"arxiv:2506.09980",
"license:other",
"region:us"
] | null | 2025-05-27T14:46:37Z | ---
license: other
license_name: nvidia-non-commercial-license
license_link: https://huggingface.co/nvidia/PartPacker/blob/main/LICENSE
language:
- en
---
# Model Card for PartPacker
## Description
PartPacker is a three-dimensional (3D) generation model that is able to generate part-level 3D objects from single-view images.
We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object.
This model is ready for non-commercial use.
## License/Terms of Use
[NVIDIA Non-Commercial License](https://huggingface.co/nvidia/PartPacker/blob/main/LICENSE)
## Deployment Geography
Global
## Use Case
PartPacker takes a single input image and generates a 3D shape with an arbitrary number of complete parts. Each part can be separated and edited independently to facilitate downstream tasks such as editing and animation.
It's intended to be used by researchers and academics to develop new 3D generation methods.
## Release Date
* Github: 06/11/2025 via [https://github.com/NVlabs/PartPacker](https://github.com/NVlabs/PartPacker)
* Huggingface: 06/11/2025 via [https://huggingface.co/NVlabs/PartPacker](https://huggingface.co/NVlabs/PartPacker)
## Reference(s)
[Code](https://github.com/NVlabs/PartPacker)
[Paper](https://arxiv.org/abs/2506.09980)
## Model Architecture
**Architecture Type:** Transformer
**Network Architecture:** Diffusion Transformer (DiT)
## Input
**Input Type(s):** Image
**Input Format(s):** Red, Green, Blue (RGB)
**Input Parameters:** Two-dimensional (2D) image
**Other Properties Related to Input:** Resolution will be resized to 518x518.
## Output
**Output Type(s):** Triangle Mesh
**Output Format:** GL Transmission Format Binary (GLB)
**Output Parameters:** Three-dimensional (3D) triangle mesh
**Other Properties Related to Output:** Extracted at a resolution up to 512^3; without texture.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration
### Runtime Engine(s)
* PyTorch
### Supported Hardware Microarchitecture Compatibility
* NVIDIA Ampere
* NVIDIA Hopper
### Preferred Operating System(s)
* Linux
## Model Version(s)
v1.0
## Training, Testing, and Evaluation Datasets
We perform training, testing, and evaluation on the Objaverse-XL dataset.
For the VAE model, we use the first 253K meshes for training and the rest 1K meshes for validation.
For the Flow model, we use all 254K meshes for training.
### Objaverse-XL
**Link**: https://objaverse.allenai.org/
**Data Collection Method**: Hybrid: Automatic, Synthetic
**Labeling Method by dataset**: N/A (no labels)
**Properties:** We use about 254k mesh data, which is a subset from the Objaverse-XL filtered by the number of parts.
## Inference
**Acceleration Engine**: PyTorch
**Test Hardware**: NVIDIA A100 (1 GPU configuration)
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
ravi86/mood_detector | ravi86 | 2025-06-16T11:09:20Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"emotion-detection",
"facial-expressio",
"image-classification",
"deep-learning",
"cnn",
"en",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-14T05:37:49Z | ---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- microsoft/resnet-50
new_version: google/vit-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- pytorch
- emotion-detection
- facial-expressio
- image-classification
- deep-learning
- cnn
---
# 🎭 Face Expression Detector
A deep learning model that classifies facial expressions in grayscale images into one of seven core emotions. Designed for applications in **emotion analytics**, **human-computer interaction**, and **psychological research**.
---
## ๐ Model Overview
This model takes **48x48 grayscale face images** and classifies them into:
- 😠 Angry
- 🤢 Disgust
- 😨 Fear
- 😊 Happy
- 😢 Sad
- 😲 Surprise
- 😐 Neutral
**Dataset**: [FER2013](https://www.kaggle.com/datasets/msambare/fer2013)
**Training Samples**: 28,709
**Testing Samples**: 3,589
---
## 🧠 Model Architecture
- 📦 **Custom CNN**
  - 3 Convolutional Layers
  - Batch Normalization
  - ReLU Activation
  - Dropout for regularization
- 🔄 Optimizer: `Adam`
- 🔥 Loss Function: `Categorical Crossentropy`
- ⏱️ Epochs: `100`
---
## ✅ Performance
> *Add your actual performance metrics here:*
- Accuracy on FER2013 Test Set: **~1.0%**
---
## ๐๏ธ Required Files
- `model.h5` โ Model Weights
- `config.json` โ Configuration file *(Transformers-based)*
- `preprocessor_config.json` โ Preprocessing setup *(if needed)*
- `requirements.txt` โ Python dependencies
---
## ๐ Use Cases
- ๐ฎ Real-time emotion feedback in games or virtual assistants
- ๐ Emotion analysis for psychological and behavioral studies
- ๐ฅ Enhancing video-based UX with dynamic emotion tracking
---
## ⚠️ Limitations
- Works best with **centered 48x48 grayscale faces**
- **Face detection (e.g., MTCNN)** required before prediction
- FER2013's demographic diversity is limited โ potential bias
---
## ⚙️ Installation
Follow these steps to set up the environment and dependencies:

### 1. Clone the Repository
```bash
git clone https://github.com/TRavi8688/Mood-Based-Music-Player
cd mood_detector
```

### 2. Install Dependencies
```bash
pip install -r requirements.txt
```
`requirements.txt` should contain:
```
torch>=1.9.0
transformers>=4.20.0
pillow>=8.0.0
```
## 🧪 How to Use
Follow these steps to preprocess an image and predict a facial expression with the pre-trained Keras model:
```python
import numpy as np
import requests
import tensorflow as tf
from PIL import Image
```
### 1. Download and Load the Model
```python
# Install dependencies first: pip install tensorflow pillow numpy requests
import requests
import tensorflow as tf

model_url = "https://huggingface.co/ravi86/mood_detector/resolve/main/my_model.h5"
model_path = "my_model.h5"

# Download the weights file
response = requests.get(model_url)
with open(model_path, "wb") as f:
    f.write(response.content)
print("✅ Model downloaded successfully!")

# Load the Keras model
model = tf.keras.models.load_model(model_path)
```
### 2. Load and Preprocess the Image
```python
import numpy as np

image_path = "your_image.jpg"  # Replace with your image file
image = Image.open(image_path).convert("L").resize((48, 48))  # Grayscale, 48x48
x = np.asarray(image, dtype="float32") / 255.0  # Scale pixel values to [0, 1]
x = x.reshape(1, 48, 48, 1)  # Add batch and channel dimensions
```
### 3. Make Predictions
```python
probs = model.predict(x)[0]  # Class probabilities (assumes a softmax output layer)
predicted_class = int(probs.argmax())  # Index of the most likely emotion
```
### 4. Interpret the Result
```python
emotions = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
print(f"Predicted Emotion: {emotions[predicted_class]}")
```
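As a quick sanity check of the interpretation step without loading any model, the same mapping can be exercised on a synthetic probability vector (hypothetical values, standing in for a 7-way softmax output):

```python
import numpy as np

emotions = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
probs = np.array([0.05, 0.02, 0.08, 0.60, 0.10, 0.05, 0.10])  # fake softmax output
predicted_class = int(np.argmax(probs))
print(f"Predicted Emotion: {emotions[predicted_class]}")  # Predicted Emotion: Happy
```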
### ☁️ Deploy to Hugging Face Hub
Use these commands to prepare and push your model to the Hugging Face Hub:
```bash
# Step 1: Install & log in
pip install huggingface_hub
huggingface-cli login
```
```python
# Step 2: Upload the model folder
from huggingface_hub import upload_folder

upload_folder(
    folder_path="path/to/mood_detector",
    repo_id="ravi86/mood_detector",
    repo_type="model",
    commit_message="🚀 Upload mood detection model"
)
```
### 🧭 Ethical Considerations
- ⚖️ **Bias**: The FER2013 dataset may exhibit biases in demographic representation. Exercise caution when interpreting results across diverse populations.
- 🔒 **Privacy**: Ensure strict compliance with data privacy laws (e.g., GDPR, CCPA) when using this model on personal or sensitive images. Do not use without explicit consent.
- ❌ **Misuse**: This model is not intended for unauthorized surveillance, profiling, or any other unethical applications.

### 🤝 Contact
For questions, support, or collaborations:
- Hugging Face: @ravi86
- Gmail: [email protected]
⭐ If you find this project useful, consider giving a star or contributing! |
Sri2901/04_cosmos_v3_Shift | Sri2901 | 2025-06-16T11:08:08Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T11:07:39Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: cosmos
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: A model poses confidently on a stone railing overlooking a lake with lush
green mountains in the background, wearing a white sleeveless dress and holding
a white quilted handbag. the model is standing in the middle of the image, with
her arms resting on the edge of the stone railing. she is wearing white sunglasses
and has a confident expression on her face. her long black hair cascades down
her back, and her eyes are directed straight ahead. her skin is smooth and fair,
and she has dark brown skin tones. the lake is calm and serene, with a few small
waves lapping against the shore, and the mountains are covered in lush green vegetation,
creating a picturesque backdrop. the lighting is bright and natural, highlighting
the model's beauty and elegance. the overall mood is sophisticated and timeless,
evoking a sense of luxury and sophistication.
output:
url: samples/1750069866114__000004000_2.jpg
- text: A woman standing outdoors in a marina with boats and mountains in the background.
she is wearing a white and blue floral print midi dress with spaghetti straps
and a slit at the side, paired with a woven straw bag. her long, dark brown hair
cascades down her back, and she is accessorized with a pair of gold earrings and
a round, gold-framed sunglasses. the sun is shining brightly, casting a warm glow
on her face and the surrounding vegetation. the woman stands confidently in the
middle of the image, with her hands resting on her hips, looking off into the
distance with a slight smile on her lips. the marina is filled with boats docked
in neat rows, and the hills in the distance provide a picturesque backdrop to
the scene.
output:
url: samples/1750069848760__000004000_1.jpg
- text: A photo shoot from a side angle about a woman in a black one-piece swimsuit
sitting by a pool with pebbles, wearing sunglasses. the image also shows water
droplets on her skin, highlighting her features. on the middle of the image, a
woman appears to be in her mid-twenties, with dark brown hair styled in loose
waves, wearing a black swimsuit with a plunging neckline and a side slit, sitting
on the edge of a swimming pool with her legs crossed. she has a sultry expression
and is looking directly at the camera. her body is slim and toned, with a slight
curve to her chest. she is wearing black sunglasses, which add a touch of sophistication
to her look. the background is a clear blue swimming pool, surrounded by pebble
stones. the lighting is bright and natural, casting shadows on her body. the overall
effect is one of relaxation and tranquility.
output:
url: samples/1750069831302__000004000_0.jpg
---
# 04_cosmos_v3_shift
Model trained with AI Toolkit by Ostris
<Gallery />
## Trigger words
You should use `cosmos` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/username/04_cosmos_v3_shift/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('username/04_cosmos_v3_shift', weight_name='04_cosmos_v3_shift.safetensors')
image = pipeline('cosmos A model poses confidently on a stone railing overlooking a lake with lush green mountains in the background, wearing a white sleeveless dress and holding a white quilted handbag. the model is standing in the middle of the image, with her arms resting on the edge of the stone railing. she is wearing white sunglasses and has a confident expression on her face. her long black hair cascades down her back, and her eyes are directed straight ahead. her skin is smooth and fair, and she has dark brown skin tones. the lake is calm and serene, with a few small waves lapping against the shore, and the mountains are covered in lush green vegetation, creating a picturesque backdrop. the lighting is bright and natural, highlighting the model's beauty and elegance. the overall mood is sophisticated and timeless, evoking a sense of luxury and sophistication.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Sri2901/01_Studio_v1_Shift | Sri2901 | 2025-06-16T11:05:20Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T11:04:51Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: studi@
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: A woman standing confidently in the middle of a plain grey background, wearing
a blue and white floral print dress with puff sleeves and black ankle boots. she
is holding a small black clutch in her left hand and wearing black sunglasses.
her long black hair cascades down her back, framing her face perfectly. the dress
has a gathered waistline and a flowy skirt that falls just above her knees. the
woman is standing with her arms at her sides, looking directly at the camera with
a neutral expression. her black boots are ankle-length and reach up to her knees,
adding a touch of sophistication to her outfit. the background is simple and minimalistic,
allowing the focus to remain on the woman's figure and the subtle pattern of her
dress.
output:
url: samples/1750068339533__000004000_2.jpg
- text: A woman standing confidently in the middle of the image, wearing a black dress
with white piping and a blue handbag. She has short black hair styled in a sleek
manner and is wearing gold earrings. her posture is confident and strong, and
she is standing with one hand on her hip and the other resting on her thigh. the
background is a simple gradient of light blue to white, creating a minimalistic
look. the lighting is soft and even, highlighting her features and creating a
sense of sophistication. the image is shot from a full body angle, focusing on
the woman's full body and the handbag in her left hand.
output:
url: samples/1750068322435__000004000_1.jpg
- text: A photo-realistic shoot from a front camera angle about a woman in a white
sleeveless dress with a brown belt and a woven basket, standing confidently in
the middle of the image. the woman has long black hair and is wearing black flats.
she is standing in front of a plain grey background, with her hands behind her
back and a neutral expression on her face. her eyes are looking directly at the
viewer, and she appears to be in her mid-twenties. her body is slim and she is
wearing minimal clothing, except for a white dress that has a high collar and
a wide waist. her hair is styled in loose waves and falls down her back. her ears
are not visible, but she is also wearing minimal jewelry. she has a small basket
in her right hand, which is woven with a light brown color and has a gold buckle
around her waist. the image also shows a minimalistic and clean aesthetic, with
a focus on the woman's outfit and accessories.
output:
url: samples/1750068305344__000004000_0.jpg
---
# 01_Studio_v1_Shift
Model trained with AI Toolkit by Ostris
<Gallery />
## Trigger words
You should use `studi@` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/username/01_Studio_v1_Shift/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('username/01_Studio_v1_Shift', weight_name='01_Studio_v1_Shift.safetensors')
image = pipeline('studi@ A woman standing confidently in the middle of a plain grey background, wearing a blue and white floral print dress with puff sleeves and black ankle boots. she is holding a small black clutch in her left hand and wearing black sunglasses. her long black hair cascades down her back, framing her face perfectly. the dress has a gathered waistline and a flowy skirt that falls just above her knees. the woman is standing with her arms at her sides, looking directly at the camera with a neutral expression. her black boots are ankle-length and reach up to her knees, adding a touch of sophistication to her outfit. the background is simple and minimalistic, allowing the focus to remain on the woman's figure and the subtle pattern of her dress.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Sri2901/01_Studio_v1 | Sri2901 | 2025-06-16T11:03:59Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T11:03:32Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: studi@
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: A photo-realistic shoot from a profile camera angle about a woman wearing
a blue and white floral print dress with puff sleeves and a tie waist, standing
in profile against a plain grey background. the image also shows a woman standing
in the middle of the frame, facing away from the camera, wearing a white dress
with blue floral patterns and a ruffled hem. on the right side of the image, a
woman appears to be in her mid-twenties, with long black hair styled in loose
waves, wearing earrings and a bracelet on her wrist. her expression is neutral,
and she is standing with her arms at her sides.
output:
url: samples/1750068830506__000004000_2.jpg
- text: A woman standing confidently in the middle of the image, wearing a black dress
with white piping and a blue handbag. She has short black hair styled in a sleek
manner and is wearing gold earrings. her posture is confident and strong, and
she is standing with one hand on her hip and the other resting on her thigh. the
background is a simple gradient of light blue to white, creating a minimalistic
look. the lighting is soft and even, highlighting her features and creating a
sense of sophistication. the image is shot from a full body angle, focusing on
the woman's full body and the handbag in her left hand.
output:
url: samples/1750068813061__000004000_1.jpg
- text: A woman standing confidently in the middle of a plain grey background, wearing
a blue and white floral print dress with puff sleeves and black ankle boots. she
is holding a small black clutch in her left hand and wearing black sunglasses.
her long black hair cascades down her back, framing her face perfectly. the dress
has a gathered waistline and a flowy skirt that falls just above her knees. the
woman is standing with her arms at her sides, looking directly at the camera with
a neutral expression. her black boots are ankle-length and reach up to her knees,
adding a touch of sophistication to her outfit. the background is simple and minimalistic,
allowing the focus to remain on the woman's figure and the subtle pattern of her
dress.
output:
url: samples/1750068795567__000004000_0.jpg
---
# 01_Studio_v1
Model trained with AI Toolkit by Ostris
<Gallery />
## Trigger words
You should use `studi@` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/username/01_Studio_v1/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('username/01_Studio_v1', weight_name='01_Studio_v1.safetensors')
image = pipeline('studi@ A photo-realistic shoot from a profile camera angle about a woman wearing a blue and white floral print dress with puff sleeves and a tie waist, standing in profile against a plain grey background. the image also shows a woman standing in the middle of the frame, facing away from the camera, wearing a white dress with blue floral patterns and a ruffled hem. on the right side of the image, a woman appears to be in her mid-twenties, with long black hair styled in loose waves, wearing earrings and a bracelet on her wrist. her expression is neutral, and she is standing with her arms at her sides.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-06-16T11:03:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T11:01:56Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
โ**Risk of Sensitive or Controversial Outputs**โ: This modelโs safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model's
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; whether it can match the performance of each of them remains an open question.
This is just a test. Merging different variants of models of the same type is another possibility worth exploring.
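As a rough illustration of the MoE routing described above (a toy numpy sketch, not the actual Qwen3/Huihui implementation), a gating network scores the experts for each token and the layer output is a gate-weighted mix of the selected experts:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" stands in for a small MLP; here just a single weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating network

def moe_layer(x):
    """Route each token through its top_k experts, mixed by softmax gate weights."""
    logits = x @ router                            # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gate = np.exp(logits[t, top[t]] - logits[t, top[t]].max())
        gate /= gate.sum()                         # softmax over selected experts
        for w, e in zip(gate, top[t]):
            out[t] += w * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((5, d_model))
y = moe_layer(tokens)
print(y.shape)  # (5, 8)
```

Only `top_k` of the `n_experts` weight matrices are touched per token, which is how an MoE model keeps inference cost well below its total parameter count.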
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -c 2048
```
|
AinurSafr/telegram-message-classifier | AinurSafr | 2025-06-16T11:03:38Z | 5 | 0 | peft | [
"peft",
"safetensors",
"ru",
"en",
"arxiv:1910.09700",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit",
"region:us"
] | null | 2025-05-10T02:40:48Z | ---
base_model: unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit
library_name: peft
language:
- ru
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
shelaz01/asl_alphabet_model | shelaz01 | 2025-06-16T11:03:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:01:35Z | ---
license: apache-2.0
---
|
LaaP-ai/donut-base-invoice-v1.04 | LaaP-ai | 2025-06-16T11:00:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-16T10:59:51Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.04
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.75_epoch1 | MinaMila | 2025-06-16T10:58:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:57:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF | Triangle104 | 2025-06-16T10:58:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:28:11Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; the hope is that it can match the performance of each of them.
This is an experiment: merging different variants of the same base model is another direction worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -c 2048
```
|
BigData-KSU/ArabVLM | BigData-KSU | 2025-06-16T10:58:22Z | 0 | 0 | peft | [
"peft",
"llava",
"ar",
"en",
"base_model:ALLaM-AI/ALLaM-7B-Instruct-preview",
"base_model:adapter:ALLaM-AI/ALLaM-7B-Instruct-preview",
"region:us"
] | null | 2025-05-08T06:53:41Z | ---
library_name: peft
language:
- ar
- en
base_model:
- ALLaM-AI/ALLaM-7B-Instruct-preview
---
## ArabVLM: Vision Language Model
- **Repository:** https://github.com/BigData-KSU/ArabVLM
- **Demo:** Soon.
## How to Get Started with the Model
### Install
1. Clone this repository and navigate to the ArabVLM folder
```
git clone https://github.com/BigData-KSU/ArabVLM.git
cd ArabVLM
```
2. Install Packages
```
pip install -r requirements.txt
```
---
### Inference
Use the code below to get started with the model.
```python
from PIL import Image
import os
import torch
from vllm.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from vllm.conversation import conv_templates, SeparatorStyle
from vllm.model.builder import load_pretrained_model
from vllm.utils import disable_torch_init
from vllm.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
### Main model....
model_path ='/BigData-KSU/ArabVLM'
model_base = 'ALLaM-AI/ALLaM-7B-Instruct-preview'
conv_mode = 'llava_llama_2'
disable_torch_init()
model_path = os.path.abspath(model_path)
print('model path')
print(model_path)
model_name = get_model_name_from_path(model_path)
print('model name')
print(model_name)
print('model base')
print(model_base)
tokenizer, model, processor, context_len = load_pretrained_model(model_path, model_base, model_name,device='cuda:0')
def chat_with_Vision_BioLLM(cur_prompt, image_name):
    # Prepare the image and prompt, adding the image placeholder token to the text
    image_mem = Image.open(image_name).convert('RGB')
    image_processor = processor['image']
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles
    image_tensor = image_processor.preprocess(image_mem, return_tensors='pt')['pixel_values']
    tensor = image_tensor.to(model.device, dtype=torch.float16)
    print(f"{roles[1]}: {cur_prompt}")
    cur_prompt = DEFAULT_IMAGE_TOKEN + '\n' + cur_prompt
    conv.append_message(conv.roles[0], cur_prompt)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=False,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])
    response = tokenizer.decode(output_ids[0, input_ids.shape[1]:])
    return response


if __name__ == "__main__":
    cur_prompt = 'وصف الصورة بالتفصيل'  # Arabic: "Describe the image in detail"
    image_name = 'path/to/image'
    outputs = chat_with_Vision_BioLLM(cur_prompt, image_name)
    print('Model Response.....')
    print(outputs)
```
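The `tokenizer_image_token` helper used above splices a sentinel image-token index into the text token IDs so the model knows where to inject image features. The following toy sketch (with a fake word-level tokenizer, not the real one) illustrates the idea; the `-200` sentinel matches common LLaVA-style code, but treat the details as assumptions rather than this repository's exact implementation.

```python
IMAGE_TOKEN_INDEX = -200  # sentinel id used by LLaVA-style models for image features

def fake_tokenize(text):
    # Stand-in tokenizer: one integer id per whitespace-separated word.
    return [hash(w) % 1000 for w in text.split()]

def tokenizer_image_token_demo(prompt, image_token="<image>"):
    # Split on the placeholder, tokenize each text chunk, and re-join
    # the chunks with the image sentinel marking the splice point.
    chunks = prompt.split(image_token)
    ids = []
    for i, chunk in enumerate(chunks):
        if i > 0:
            ids.append(IMAGE_TOKEN_INDEX)  # placeholder for image features
        ids.extend(fake_tokenize(chunk))
    return ids

ids = tokenizer_image_token_demo("<image>\nDescribe the image in detail")
```

At generation time, the model replaces each sentinel position with the projected image embeddings before running the transformer.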
- PEFT 0.4.0 |
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF | Triangle104 | 2025-06-16T10:57:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T09:56:42Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; the hope is that it can match the performance of each of them.
This is an experiment: merging different variants of the same base model is another direction worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -c 2048
```
|
Khushi-Rao-Viral-Video-clip/Full-Viral.khushi.rao.Viral.Video.Tutorial.Official.Clip.new | Khushi-Rao-Viral-Video-clip | 2025-06-16T10:57:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T10:57:05Z | ---
license: apache-2.0
---
[](https://t.co/BILDe5gBcR) |
23ikram/model8bit | 23ikram | 2025-06-16T10:55:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:40Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
calcuis/openaudio-gguf | calcuis | 2025-06-16T10:55:26Z | 287 | 3 | null | [
"gguf",
"gguf-connector",
"text-to-speech",
"base_model:fishaudio/openaudio-s1-mini",
"base_model:quantized:fishaudio/openaudio-s1-mini",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-speech | 2025-06-13T10:39:22Z | ---
license: cc-by-nc-sa-4.0
base_model:
- fishaudio/openaudio-s1-mini
pipeline_tag: text-to-speech
tags:
- gguf-connector
---
## gguf quantized version of openaudio
- base model from [fishaudio](https://huggingface.co/fishaudio)
- text-to-speech synthesis
### **run it with gguf-connector**
```
ggc o2
```

| Prompt | Audio Sample |
|--------|---------------|
|`Hey Connector, why your appearance looks so stupid?`<br/>`Oh, really? maybe I ate too much smart beans.`<br/>`Wow. Amazing (laughing).`<br/>`Let's go to get some more smart beans and you will become stupid as well.`<br/> | 🎧 **audio-sample-1**<br><audio controls src="https://huggingface.co/calcuis/openaudio-gguf/resolve/main/samples%5Caudio1.wav"></audio> |
|`Suddenly the plane's engines began failing, and the pilot says there isn't much time, and he'll keep the plane in the air as long as he can, and told his two passengers to take the only two parachutes on board and bail out. The world's smartest man immediately took a parachute and said "I'm the world's smartest man! The world needs me, so I can't die here!", and then jumped out of the plane. The pilot tells the hippie to hurry up and take the other parachute, because there aren't any more. And the hippie says "Relax man. We'll be fine. The world's smartest man took my backpack."`<br/> | 🎧 **audio-sample-2**<br><audio controls src="https://huggingface.co/calcuis/openaudio-gguf/resolve/main/samples%5Caudio2.wav"></audio> |
### **review/reference**
- simply execute the command (`ggc o2`) above in console/terminal
- pick a `codec` and a `model` gguf in the current directory to interact with (see the example below)
>
>GGUF file(s) available. Select which one for **codec**:
>
>1. codec-q2_k.gguf
>2. codec-q3_k_m.gguf
>3. codec-q4_k_m.gguf (recommended)
>4. codec-q5_k_m.gguf
>5. codec-q6_k.gguf
>6. model-bf16.gguf
>7. model-f16.gguf
>8. model-f32.gguf
>
>Enter your choice (1 to 8): 3
>
>GGUF file(s) available. Select which one for **model**:
>
>1. codec-q2_k.gguf
>2. codec-q3_k_m.gguf
>3. codec-q4_k_m.gguf
>4. codec-q5_k_m.gguf
>5. codec-q6_k.gguf
>6. model-bf16.gguf (recommended)
>7. model-f16.gguf (for non-cuda user)
>8. model-f32.gguf
>
>Enter your choice (1 to 8): _
>
- note: as of the latest update, only the tokenizer is pulled to `models/fish` automatically during the first launch; you need to prepare the **codec** and **model** files yourself, working just like the [vision](https://huggingface.co/calcuis/llava-gguf) connector; mix and match for more flexibility
- run it entirely offline; i.e., from local URL: http://127.0.0.1:7860 with lazy webui
- gguf-connector ([pypi](https://pypi.org/project/gguf-connector)) |
GhulamNabi/finetuned-Llama-3.2-3B | GhulamNabi | 2025-06-16T10:55:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hullolo/itn-classifier | hullolo | 2025-06-16T10:54:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:00Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hullolo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
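For inference with an instruct finetune like this one, the usual route is `tokenizer.apply_chat_template`. As a self-contained illustration of what that template produces for the Llama 3 instruct family, the sketch below assembles the prompt string by hand. The special-token names follow Meta's published Llama 3 chat format and are an assumption about this finetune, not something stated in this card.

```python
def build_llama3_prompt(messages):
    """Assemble a Llama 3 instruct-style prompt from (role, content) pairs.

    Token names follow Meta's Llama 3 chat format; a finetune may differ,
    so prefer tokenizer.apply_chat_template when the tokenizer is available.
    """
    parts = ["<|begin_of_text|>"]
    for role, content in messages:
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
        )
    # Cue the model to generate the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    ("system", "You are a concise assistant."),
    ("user", "Classify this sentence."),
])
print(prompt)
```

In practice the tokenizer shipped with the model is authoritative for the exact template; this sketch only makes the structure visible.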
|
Khushi-Rao-Viral-Video-clip/FULL.VIDEO.khushi.Rao.Viral.Video.Clip | Khushi-Rao-Viral-Video-clip | 2025-06-16T10:53:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T10:52:45Z | ---
license: apache-2.0
---
[](https://t.co/BILDe5gBcR) |
aieng-lab/codet5p-770m_smell-doc | aieng-lab | 2025-06-16T10:53:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:Salesforce/codet5p-770m",
"base_model:finetune:Salesforce/codet5p-770m",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:52:52Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- Salesforce/codet5p-770m
pipeline_tag: text-classification
---
# CodeT5+ 770m for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [Salesforce/codet5p-770m](https://huggingface.co/Salesforce/codet5p-770m)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
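The card describes a multi-label setup; the sketch below shows the post-processing step such a classifier typically needs (an independent sigmoid per label plus a threshold), using the five smell names from the description above. The label order and the 0.5 threshold are assumptions for illustration, not part of the released model.

```python
import math

# Label set from the model description; the order is an assumption.
LABELS = ["fragmented", "tangled", "excessive", "bloated", "lazy"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def decode_multilabel(logits, threshold=0.5):
    """Map one vector of per-label logits to the set of predicted smells."""
    probs = [sigmoid(z) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

# Example: clearly positive logits for 'tangled' and 'lazy' only.
print(decode_multilabel([-2.0, 3.1, -1.5, -0.2, 2.4]))  # ['tangled', 'lazy']
```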
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/codet5p-220m_smell-doc | aieng-lab | 2025-06-16T10:52:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:Salesforce/codet5p-220m",
"base_model:finetune:Salesforce/codet5p-220m",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T10:51:54Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- Salesforce/codet5p-220m
pipeline_tag: text-classification
---
# CodeT5+ 220m for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [Salesforce/codet5p-220m](https://huggingface.co/Salesforce/codet5p-220m)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
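The metric names listed above (f1, precision, recall) are computed per label in a multi-label setting. As a small sketch with the definitions written out for a single label (the test split and averaging scheme used in the benchmark are not restated here):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision/recall/F1 from parallel 0/1 lists for one label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1([1, 0, 1, 1], [1, 1, 0, 1]))
```

Macro scores average these per-label values; micro scores pool the counts across labels first.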
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.05_epoch2 | MinaMila | 2025-06-16T10:50:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T10:49:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |