| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
LandCruiser/Tournai_4
|
LandCruiser
| 2025-02-28T19:13:29Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T19:04:27Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
vctmk/mantis-8b-idefics2-classification-tedEDself_4g_4096_regression
|
vctmk
| 2025-02-28T19:13:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics2",
"text-classification",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-28T18:53:51Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-classification-tedEDself_4g_4096_regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2-classification-tedEDself_4g_4096_regression
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50.0
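For context, a minimal sketch of how the hyperparameters listed above would map onto `transformers.TrainingArguments`; this is an illustration under stated assumptions (placeholder output directory, launcher-managed multi-GPU), not the exact script used to train this model.
```python
# Hypothetical reconstruction of the listed hyperparameters as TrainingArguments;
# output_dir is a placeholder, and the 8-device launch is handled by the launcher
# (accelerate/torchrun), not by these arguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mantis-8b-idefics2-classification",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # 1 per device x 8 devices x 8 accumulation = 64 total
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=50.0,
)
```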
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.20.3
|
MoBnJlal/dqn-SpaceInvadersNoFrameskip-v4
|
MoBnJlal
| 2025-02-28T19:13:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-28T19:12:51Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 700.50 +/- 214.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MoBnJlal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MoBnJlal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MoBnJlal
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
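If you prefer to skip the RL Zoo scripts, a minimal sketch of loading this checkpoint straight into stable-baselines3 via `huggingface_sb3` is shown below; the filename is an assumption based on the usual RL Zoo naming convention and should be checked against the repo's file list.
```python
# Sketch only: pull the checkpoint from the Hub and load it with SB3's DQN.
# The filename follows the typical RL Zoo convention ("dqn-<env>.zip") and is assumed.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint_path = load_from_hub(
    repo_id="MoBnJlal/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint_path)  # pass env=... if you want to run rollouts
```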
|
Xavieress/modelX1
|
Xavieress
| 2025-02-28T19:12:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T19:12:51Z |
---
license: apache-2.0
---
|
Lily-Phillips-101-Challenge-Video-TVs/wATCH.Lily-Phillips-101-Challenge.video.original
|
Lily-Phillips-101-Challenge-Video-TVs
| 2025-02-28T19:10:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T19:09:39Z |
<a href="https://onlyurls.me/32413/?ngarang">Click here to watch the full video</a>
<a href="https://onlyurls.me/32413/?ngarang">Click here for the full video link</a>
<a href="https://onlyurls.me/32413/?ngarang" rel="nofollow"><img src="https://i.postimg.cc/gjM7d5zQ/trhth.gif" alt="image/png"></a>
|
tttx/models-3k-forced-p301-final-022825-step6
|
tttx
| 2025-02-28T19:09:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:tttx/3k-forced-p301-final-022825-step6-collated",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | null | 2025-02-28T18:53:13Z |
---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tttx/3k-forced-p301-final-022825-step6-collated
model-index:
- name: models-3k-forced-p301-final-022825-step6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models-3k-forced-p301-final-022825-step6
This model is a fine-tuned version of [tttx/sft-32b-020925-19k-5ep](https://huggingface.co/tttx/sft-32b-020925-19k-5ep) on the tttx/3k-forced-p301-final-022825-step6-collated dataset.
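Since this repository ships a PEFT adapter rather than full weights, a minimal loading sketch is given below. Note that the card metadata lists deepseek-ai/DeepSeek-R1-Distill-Qwen-32B as the base while the sentence above names tttx/sft-32b-020925-19k-5ep, so the correct base checkpoint should be verified before use.
```python
# Illustrative sketch, not an official usage snippet: attach this LoRA adapter to a
# base model with PEFT. The base checkpoint below comes from the card metadata and
# may need to be swapped for the one named in the prose above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"  # per the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "tttx/models-3k-forced-p301-final-022825-step6")
```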
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 486592
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
JacksonBrune/2c5bd041-3142-4883-8d64-97bba9a35328
|
JacksonBrune
| 2025-02-28T19:09:02Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2025-02-28T19:08:50Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: JacksonBrune/2c5bd041-3142-4883-8d64-97bba9a35328
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/2c5bd041-3142-4883-8d64-97bba9a35328
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Zack-Z/llama31_8bi_CoTsft_rs3407_3_e1
|
Zack-Z
| 2025-02-28T19:08:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:46:06Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
flutedev/whisper-subset
|
flutedev
| 2025-02-28T19:08:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-28T16:37:53Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-subset
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Wer: 2.9270
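The card gives no usage snippet, so here is a minimal transcription sketch using the standard `transformers` ASR pipeline; the audio path is a placeholder.
```python
# Minimal sketch (not from the original card): transcribe a local audio file with
# this fine-tuned Whisper checkpoint. "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="flutedev/whisper-subset")
result = asr("sample.wav")
print(result["text"])
```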
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.8512 | 0.2976 | 50 | 0.5670 | 6.7775 |
| 0.1532 | 0.5952 | 100 | 0.1123 | 4.4757 |
| 0.1246 | 0.8929 | 150 | 0.0899 | 3.5521 |
| 0.0499 | 1.1905 | 200 | 0.0788 | 3.1117 |
| 0.0479 | 1.4881 | 250 | 0.0690 | 2.8417 |
| 0.0337 | 1.7857 | 300 | 0.0654 | 3.1401 |
| 0.0185 | 2.0833 | 350 | 0.0653 | 2.9270 |
| 0.0114 | 2.3810 | 400 | 0.0628 | 2.9270 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Shinichie/Mar1_wtaTEST4
|
Shinichie
| 2025-02-28T19:07:18Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T19:05:56Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kokoutou/Verviers_10
|
Kokoutou
| 2025-02-28T19:07:00Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kokoutou/Verviers_9
|
Kokoutou
| 2025-02-28T19:06:38Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:08Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Shinichie/Mar1_wtaTEST5
|
Shinichie
| 2025-02-28T19:06:10Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T19:04:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
youssefihab33/describe-the-product
|
youssefihab33
| 2025-02-28T19:05:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T19:05:32Z |
---
license: apache-2.0
---
|
aevalone/deepslothagent-Q8_0-GGUF
|
aevalone
| 2025-02-28T19:04:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:aevalone/deepslothagent",
"base_model:quantized:aevalone/deepslothagent",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-28T19:04:05Z |
---
base_model: aevalone/deepslothagent
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# aevalone/deepslothagent-Q8_0-GGUF
This model was converted to GGUF format from [`aevalone/deepslothagent`](https://huggingface.co/aevalone/deepslothagent) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aevalone/deepslothagent) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aevalone/deepslothagent-Q8_0-GGUF --hf-file deepslothagent-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aevalone/deepslothagent-Q8_0-GGUF --hf-file deepslothagent-q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aevalone/deepslothagent-Q8_0-GGUF --hf-file deepslothagent-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aevalone/deepslothagent-Q8_0-GGUF --hf-file deepslothagent-q8_0.gguf -c 2048
```
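As an alternative to the llama.cpp binaries above, a short sketch using llama-cpp-python is given below; it assumes a recent llama-cpp-python build with `Llama.from_pretrained` (which downloads the GGUF from the Hub via `huggingface_hub`).
```python
# Alternative sketch, not from the original card: run this GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aevalone/deepslothagent-Q8_0-GGUF",
    filename="deepslothagent-q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```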
|
bonamt11/MentalLlama-3.2-3B-bnb-4bit
|
bonamt11
| 2025-02-28T19:03:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T19:02:58Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bonamt11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
simonycl/llama-3.1-llama-70b-instruct
|
simonycl
| 2025-02-28T19:02:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:59:15Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the llama-3.3-70b-ultrainteract dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.4.0+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Kokoutou/Verviers_5
|
Kokoutou
| 2025-02-28T19:01:07Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:07Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kokoutou/Verviers_4
|
Kokoutou
| 2025-02-28T19:00:48Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:06Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mhohenwald/markushohenwald
|
mhohenwald
| 2025-02-28T19:00:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc",
"region:us"
] |
text-to-image
| 2025-02-28T19:00:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: portrait of MARKUSHOHENWALD
output:
url: images/replicate-prediction-scs6y0h1rxrmc0cn95jv2h4psm.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: MARKUSHOHENWALD
license: cc
---
# markushohenwald
<Gallery />
## Model description
it's me
## Trigger words
You should use `MARKUSHOHENWALD` to trigger the image generation.
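A minimal generation sketch with `diffusers` is shown below, assuming you have been granted access to the gated FLUX.1-dev base model and have a CUDA GPU available; it is an illustration, not an official snippet from this repo.
```python
# Sketch only: apply this LoRA on top of FLUX.1-dev and prompt with the trigger word.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("mhohenwald/markushohenwald")

image = pipe("portrait of MARKUSHOHENWALD", num_inference_steps=28).images[0]
image.save("markushohenwald.png")
```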
## Download model
Weights for this model are available in Safetensors format.
[Download](/mhohenwald/markushohenwald/tree/main) them in the Files & versions tab.
|
Kokoutou/Verviers_3
|
Kokoutou
| 2025-02-28T19:00:13Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:06Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
thomlar24/estebangar
|
thomlar24
| 2025-02-28T18:59:54Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-02-28T17:41:26Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
allura-org/MS3-24B-Roselily-Creative
|
allura-org
| 2025-02-28T18:59:46Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ToastyPigeon/ms3-roselily-instruct",
"base_model:finetune:ToastyPigeon/ms3-roselily-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-23T03:34:29Z |
---
base_model:
- ToastyPigeon/ms3-roselily-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# todo
make a model card and put a cute girl on it
# some info
Making this public so it can be tried and possibly merged if desired while I work on getting the energy to write a proper card.
Short list of things to know:
- This is a bunch of RP, story writing, and other creative data applied to [ToastyPigeon/ms3-roselily-instruct](https://huggingface.co/ToastyPigeon/ms3-roselily-instruct).
- Instruct format: ChatML or Alpaca preferred, Tekken v7 possible
- ChatML tokens were assigned to unused tokens 20 and 21; this leaves all the Tekken tokens intact, so merges with Tekken models are feasible
- The instruct-tuning phase did include Tekken v7, so the tokens are initialized and recognized, but I did not continue with it on the creative step because I do not like it for creative stuff (too restrictive with turn order)
- Feels a little less sensitive to samplers than Instruct-based MS3 models, but should probably still be used with conservative samplers
# chat templates
You may need to set `<|im_end|>` and/or `</s>` as stopping strings depending on which format you're using; the model generates both properly, but tokenizers can be finicky about what they stop on by default.
Alpaca w/ System
```
### System:
{system prompt}
### Instruction:
{user message}
### Response:
{model answer}</s>
```
ChatML
```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{model answer}<|im_end|>
```
The model also saw some completion training in chat mode and adventure mode.
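For reference, a small sketch that builds the ChatML prompt shown above by hand and keeps both stop strings the card recommends; whether the repo's tokenizer ships a built-in chat template is not stated here, so the string is assembled manually.
```python
# Sketch only: assemble the ChatML format from this card and track both stop markers.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

stop_strings = ["<|im_end|>", "</s>"]  # per the note above
prompt = chatml_prompt("You are a creative writing assistant.", "Write a short scene.")
```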
|
WenFengg/25FEBBB4_O1K9
|
WenFengg
| 2025-02-28T18:58:57Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:52:35Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Deekila-Sherpa-Video-HD-TV/wATCH.Deekila-Sherpa.viral.video.original
|
Deekila-Sherpa-Video-HD-TV
| 2025-02-28T18:58:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:57:14Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/?V=Deekila-Sherpa)
[Click here to watch the full video](https://lekedvideo.xyz/watch/?V=Deekila-Sherpa)
[Click here for the full video link](https://lekedvideo.xyz/watch/?V=Deekila-Sherpa)
|
simonycl/llama-3.1-qwen-70b-instruct
|
simonycl
| 2025-02-28T18:58:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:55:40Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the qwen_2.5_70b_ultrainteract dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.4.0+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Shinichie/Mar1_wtaDEV2
|
Shinichie
| 2025-02-28T18:58:16Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:57:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
WenFengg/25FEBBB3_V1K2
|
WenFengg
| 2025-02-28T18:58:06Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:56:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
WenFengg/25FEBBB1_V1K2
|
WenFengg
| 2025-02-28T18:57:28Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:56:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TongZheng1999/Qwen2.5-7B-Instruct-star-code-3Rounds-iter-2
|
TongZheng1999
| 2025-02-28T18:54:32Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:42:13Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-star-code-3Rounds-iter-2
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-star-code-3Rounds-iter-2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/Qwen2.5-7B-Instruct-star-code-3Rounds-iter-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/a4c32o80)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF
|
bartowski
| 2025-02-28T18:54:18Z | 0 | 2 | null |
[
"gguf",
"text-generation",
"base_model:qihoo360/TinyR1-32B-Preview",
"base_model:quantized:qihoo360/TinyR1-32B-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-02-28T16:52:26Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: qihoo360/TinyR1-32B-Preview
license: apache-2.0
---
## Llamacpp imatrix Quantizations of TinyR1-32B-Preview-v0.1 by qihoo360
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4792">b4792</a> for quantization.
Original model: https://huggingface.co/qihoo360/TinyR1-32B-Preview
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<｜begin▁of▁sentence｜>{system_prompt}<｜User｜>{prompt}<｜Assistant｜><｜end▁of▁sentence｜><｜Assistant｜>
```
## What's new:
Tokenizer changes to fix repeating output from the original, but they result in some quality loss
See notes on original model here: https://huggingface.co/qihoo360/TinyR1-32B-Preview#hotfix
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [TinyR1-32B-Preview-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| [TinyR1-32B-Preview-v0.1-Q6_K_L.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q5_K_L.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q4_1.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q4_1.gguf) | Q4_1 | 20.64GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [TinyR1-32B-Preview-v0.1-Q4_K_L.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q4_0.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [TinyR1-32B-Preview-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [TinyR1-32B-Preview-v0.1-Q3_K_XL.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [TinyR1-32B-Preview-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [TinyR1-32B-Preview-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
| [TinyR1-32B-Preview-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |
| [TinyR1-32B-Preview-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [TinyR1-32B-Preview-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |
| [TinyR1-32B-Preview-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [TinyR1-32B-Preview-v0.1-Q2_K_L.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [TinyR1-32B-Preview-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 12.84GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [TinyR1-32B-Preview-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |
| [TinyR1-32B-Preview-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [TinyR1-32B-Preview-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |
| [TinyR1-32B-Preview-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF/blob/main/qihoo360_TinyR1-32B-Preview-v0.1-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF --include "qihoo360_TinyR1-32B-Preview-v0.1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/qihoo360_TinyR1-32B-Preview-v0.1-GGUF --include "qihoo360_TinyR1-32B-Preview-v0.1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (qihoo360_TinyR1-32B-Preview-v0.1-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
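As a rough illustration of that rule of thumb, the toy helper below (not part of the original card) picks the largest quant from the table above that fits a given VRAM budget while keeping about 1.5 GB of headroom.
```python
# Toy helper: choose the largest quant that fits in VRAM with some headroom.
QUANT_SIZES_GB = {  # subset of the table above
    "Q8_0": 34.82, "Q6_K": 26.89, "Q5_K_M": 23.26, "Q4_K_M": 19.85,
    "IQ4_XS": 17.69, "Q3_K_M": 15.94, "IQ3_M": 14.81, "Q2_K": 12.31,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str | None:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(24.0))  # a 24 GB card -> "Q4_K_M"
```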
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
lesso01/6ee30fde-da2f-4ebb-b570-d1d34f272a8f
|
lesso01
| 2025-02-28T18:53:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T18:14:00Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6ee30fde-da2f-4ebb-b570-d1d34f272a8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a71e32b19bebc43_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a71e32b19bebc43_train_data.json
type:
field_input: transcripts
field_instruction: image_url
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso01/6ee30fde-da2f-4ebb-b570-d1d34f272a8f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000201
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/2a71e32b19bebc43_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 565c90ee-5085-44ce-a8a3-10804b2f6937
wandb_project: 01a
wandb_run: your_name
wandb_runid: 565c90ee-5085-44ce-a8a3-10804b2f6937
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6ee30fde-da2f-4ebb-b570-d1d34f272a8f
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.5011 |
| 6.3374 | 0.0006 | 50 | 3.2582 |
| 7.0695 | 0.0012 | 100 | 3.8099 |
| 8.3821 | 0.0018 | 150 | 4.1710 |
| 7.8302 | 0.0023 | 200 | 3.8861 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
generator-ai-tool/ai-porns-generator
|
generator-ai-tool
| 2025-02-28T18:53:00Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-02-28T18:52:45Z |
---
license: mit
---
# 7 Best AI Porn Generators Of 2025
The world of adult content has been revolutionized by artificial intelligence, with AI porn generators pushing the boundaries of realism and creativity. As we step into 2025, these tools have become more advanced, accessible, and controversial than ever. Whether you're curious about the technology or exploring its possibilities, we've rounded up the 7 best AI porn generators of 2025, showcasing the cutting-edge tools shaping this evolving industry.
## 1. Seduced.ai
### Why I Recommend Seduced.ai
Seduced.ai stands out as the best AI porn generator available today. It offers a unique blend of user-friendliness and extensive customization options, making it accessible for everyone, regardless of technical expertise. The platform allows users to explore their fantasies and create personalized content effortlessly.
[**Try Seduced.ai For Free**](https://sussap.net/h88f)

### Key Features
Extensive Fetish Support: Seduced.ai covers a wide range of fetishes, allowing users to generate content that caters to their specific desires.
Video Generation: Users can create short porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Character Reusability: The platform allows users to save and reuse previously generated characters, enhancing creativity and continuity in content creation.
High-Quality Output: Seduced.ai provides options for upscaling images, ensuring that the generated content is not only unique but also visually appealing.
### My Experience
Using Seduced.ai has been a delightful experience. The interface is intuitive, making it easy to navigate through various options. I was able to generate high-quality images and videos quickly, which exceeded my expectations. The customization options allowed me to explore different scenarios and characters effortlessly.
### Pros
Easy to use, with no technical skills required.
Offers a vast array of extensions for unique content creation.
### Cons
Some features may require a subscription for full access.
[**Try Seduced.ai For Free**](https://sussap.net/h88f)
## 2. Pornx.ai
Pornx.ai is a revolutionary platform that allows users to create stunning AI-generated adult content tailored to their fantasies. With its user-friendly interface and advanced features, it stands out as the best AI porn generator available today. I highly recommend it for anyone looking to explore their creativity in a safe and imaginative environment.
[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
### Why I Recommend It
Pornx.ai offers an unparalleled experience for users who wish to bring their fantasies to life. The platform's innovative tools and features make it easy to customize and generate unique content, ensuring that every user can create something truly special.
### Key Features
AI Image Generator: Create personalized images by selecting models, body types, and backgrounds.
Quality Mode: Enhance your images with options for Base, High, and Ultra quality settings.
Custom Pose: Transfer character poses from your images to generated content effortlessly.
In Paint: Modify specific areas of your images to achieve the desired look.
### My Experience
Using Pornx.ai has been an exciting journey. The intuitive design made it easy to navigate, and the results were impressive. I was able to create visuals that perfectly matched my imagination, making the experience both enjoyable and fulfilling.
### Pros
Extensive customization options allow for limitless creativity.
High-quality output enhances the overall visual experience.
### Cons
Some features may require a paid subscription for full access.
[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
## 3. Porngen.art
PornGen.art is a revolutionary platform that utilizes advanced artificial intelligence to create highly realistic and customizable pornographic images. This AI porn generator allows users to bring their fantasies to life, whether it's a dream character or a specific scenario. With its user-friendly interface and powerful algorithms, PornGen.art stands out as one of the best options available in the market.
### Why I Recommend It
PornGen.art is not just about generating images; itโs about creating personalized experiences. The platform prioritizes user privacy and offers a variety of customization options, making it a top choice for those looking to explore their fantasies safely and creatively.
### Key Features
Realistic Image Generation: Utilizes deep learning algorithms to create lifelike images.
Customizable Options: Users can adjust body type, hair, ethnicity, and more to fit their desires.
Privacy Protection: All uploaded images are confidential and deleted within 48 hours.
Multiple Styles: Explore various genres, including hentai, anime, and furry art.
### My Experience
Using PornGen.art has been an exciting journey. The ease of uploading images and the speed of generation amazed me. The results were impressive, and I appreciated the level of customization available.
### Pros
High-quality, realistic images that cater to diverse preferences.
Strong emphasis on user privacy and data security.
### Cons
Results can vary significantly based on the quality of the uploaded images.
## 4. Pornjourney.ai
PornJourney.ai stands out as the best AI porn generator available today, offering users an unparalleled experience in creating customized adult content. I recommend it for its advanced technology, user-friendly interface, and commitment to privacy and security. The platform allows users to generate images that cater to their specific preferences, making it a favorite among enthusiasts.
### Key Features
Fast Generation: Dedicated server clusters ensure quick image creation for premium users.
'Keep This Girl' Feature: Retain and modify the features of your favorite AI-generated characters.
Image Library: Save images and their metadata for easy access and modifications.
Privacy Protection: All images are encrypted, ensuring user data remains secure and private.
### My Experience
Using PornJourney.ai has been a delightful experience. The image generation process is seamless, and the results are incredibly realistic. I appreciate the variety of customization options available, allowing me to create characters that truly match my preferences.
### Pros
Exceptional realism and detail in generated images.
Regular updates with new features and content every weekend.
### Cons
AI porn videos are still in beta, which may lead to occasional instability.
## 5. Pornjoy.ai
PornJoy.ai stands out as the premier AI porn generator, offering users an innovative platform to create and customize adult content effortlessly. I recommend it for its user-friendly interface and extensive customization options that cater to a wide range of fantasies.
### Why I Recommend It
PornJoy.ai provides a unique blend of creativity and privacy, allowing users to explore their desires in a safe environment. The platform's advanced AI technology ensures high-quality images that truly reflect individual preferences.
### Key Features
AI Porn Generator: Create personalized porn images by selecting body types, skin tones, hairstyles, and outfits.
AI Porn Chat: Engage in steamy conversations with customizable AI characters, enhancing the interactive experience.
AI Hentai Generator: Quickly generate unique hentai images tailored to your specific desires.
Undress AI Generator: Transform dressed images into AI nudes, allowing for creative modifications and adjustments.
### My Experience
Using PornJoy.ai has been a delightful experience. The intuitive design made it easy to navigate, and the variety of customization options allowed me to create images that perfectly matched my fantasies.
### Pros
High-quality, realistic AI-generated images.
Strong emphasis on user privacy and data protection.
### Cons
Some features may require a learning curve for new users.
## 6. Pornpen.ai
### Why I Recommend It
I recommend Pornpen.ai for its ability to generate high-quality, personalized adult content that caters to diverse tastes. The user-friendly interface and impressive customization options make it accessible for everyone, regardless of their experience level.
### Key Features
Customizable Content: Users can specify their preferences, ensuring the generated content aligns with their desires.
High-Quality Graphics: The platform produces visually appealing images and videos that enhance the overall experience.
Privacy Protection: Pornpen.ai prioritizes user privacy, ensuring that all interactions remain confidential.
Regular Updates: The platform frequently updates its algorithms to improve content quality and user experience.
### My Experience
My experience with Pornpen.ai has been overwhelmingly positive. The platform is easy to navigate, and I was impressed by the quality of the generated content. The customization options allowed me to explore various themes, making it a fun and engaging experience.
### Pros
Innovative Technology: The AI behind Pornpen.ai is cutting-edge, producing unique content that is hard to find elsewhere.
User-Friendly Interface: The platform is designed for ease of use, making it accessible for all users.
### Cons
One downside is that the generated content may not always meet expectations, as it relies on algorithms that can sometimes produce unexpected results.
## 7. Candy.ai
### Why I Recommend It
Candy.ai is highly recommended for its ability to blend intimacy, creativity, and personalization. Users can explore various fantasies and customize their AI girlfriend to meet their desires, ensuring a fulfilling experience.
### Key Features
Customizable AI Girlfriend: Users can design their girlfriend's body type, personality, and clothing, creating a truly unique companion.
Interactive Experience: The AI girlfriend listens, responds quickly, and can even follow photo requests, making interactions feel genuine.
Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring all interactions remain confidential.
Endless Possibilities: Users can explore various scenarios, from romantic chats to intense AI sexting, catering to all preferences.
### My Experience
Using Candy.ai has been an enjoyable journey. The customization options allowed me to create a girlfriend that truly resonates with my desires. The interactions felt real, and I appreciated the privacy measures in place.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
## Frequently Asked Questions (FAQS)
### 1. What is AI porn?
AI porn refers to adult content created or enhanced using artificial intelligence technologies. This can include generating realistic images, videos, or deepfakes of individuals, often without their consent. AI porn leverages machine learning algorithms to manipulate or create explicit content that can appear highly authentic.
### 2. How does AI porn work?
AI porn typically relies on deep learning techniques, such as Generative Adversarial Networks (GANs) or diffusion models. These algorithms are trained on large datasets of images and videos to learn patterns and generate new content. For example:
Deepfakes: AI swaps faces in existing videos to make it appear as though someone is performing in a pornographic video.
Image generation: AI creates entirely synthetic images or videos of people who may not exist.
Enhancement: AI improves the quality of existing content, making it more realistic.
### 3. Can AI porn generators create realistic content?
Yes, AI porn generators can create highly realistic content. Advances in AI technology, particularly with GANs and diffusion models, have made it possible to produce images and videos that are nearly indistinguishable from real footage. However, the quality depends on the sophistication of the AI model and the data it was trained on.
### 4. Are there ethical and privacy concerns regarding AI porn?
Yes, AI porn raises significant ethical and privacy concerns:
Non-consensual content: Many AI porn creations involve using someone's likeness without their permission, which is a violation of privacy and consent.
Misuse and exploitation: AI porn can be used for harassment, revenge porn, or blackmail, causing emotional and psychological harm to victims.
Legal gray areas: Laws around AI-generated explicit content are still evolving, making it difficult to regulate or hold perpetrators accountable.
Impact on society: The proliferation of AI porn could normalize non-consensual content and contribute to the objectification of individuals.
|
lesso08/1e22a97c-fe0e-47d2-a8c7-3bcf1b038725
|
lesso08
| 2025-02-28T18:52:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T18:13:16Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1e22a97c-fe0e-47d2-a8c7-3bcf1b038725
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a71e32b19bebc43_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a71e32b19bebc43_train_data.json
type:
field_input: transcripts
field_instruction: image_url
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso08/1e22a97c-fe0e-47d2-a8c7-3bcf1b038725
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000208
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/2a71e32b19bebc43_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 80
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 565c90ee-5085-44ce-a8a3-10804b2f6937
wandb_project: 08a
wandb_run: your_name
wandb_runid: 565c90ee-5085-44ce-a8a3-10804b2f6937
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1e22a97c-fe0e-47d2-a8c7-3bcf1b038725
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.5007 |
| 6.6 | 0.0006 | 50 | 3.2678 |
| 6.6666 | 0.0012 | 100 | 3.3604 |
| 6.6271 | 0.0018 | 150 | 3.3342 |
| 6.7633 | 0.0023 | 200 | 3.3959 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
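### Loading the adapter
This repository ships only the LoRA adapter, so inference presumably requires attaching it to the pythia-160m base with PEFT. A minimal, untested sketch (the prompt is a placeholder; the training data used a custom instruction/input format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = PeftModel.from_pretrained(base, "lesso08/1e22a97c-fe0e-47d2-a8c7-3bcf1b038725")

inputs = tokenizer("<image_url> <transcript text>", return_tensors="pt")  # placeholder prompt
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```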
|
KingEmpire/Wavre_11
|
KingEmpire
| 2025-02-28T18:52:07Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:22:15Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Wavre_4
|
KingEmpire
| 2025-02-28T18:51:46Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:22:12Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TFOCUS/Inference-gadgets-maxium_1
|
TFOCUS
| 2025-02-28T18:51:15Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:27:19Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Wavre_6
|
KingEmpire
| 2025-02-28T18:50:47Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:22:13Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Dimba777/q-Taxi-v3
|
Dimba777
| 2025-02-28T18:49:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-28T18:49:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebooks (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="Dimba777/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
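Once loaded, the Q-table can be rolled out greedily as a quick sanity check. The sketch below is untested and assumes the pickled dict stores the table under a `qtable` key (the course convention) and a gymnasium-style step API:
```python
import numpy as np

qtable = model["qtable"]  # assumed key name, matching the course's push-to-hub format
state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)  # gymnasium 5-tuple API
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```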
|
apps-ai-top/ai-nude-generator
|
apps-ai-top
| 2025-02-28T18:49:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T18:48:18Z |
---
license: apache-2.0
---
# 5 Best AI Nude Generators
The best AI nude generators share a few traits: realistic and accurate results, customization options (age, body type, pose, etc.), fast rendering, and strong privacy and security.
I have tried more than 100 undress, deep-nude, and AI nude tools, and chose these 5 because they meet all of the criteria above.
## 1. Undress.app
Undress.app is recognized as one of the best AI nude generators available online. Utilizing advanced artificial intelligence technology, it allows users to create unclothed images quickly and efficiently.
The platform is user-friendly, ensuring that even those unfamiliar with such tools can navigate it with ease. With a commitment to user privacy and data security, Undress.app stands out as a trustworthy option for generating NSFW content.
โฉโฉโฉ[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### **Key Features**
Multiple AI Modes: Users can choose from various undressing modes, including Lingerie, Bikini, and NSFW mode, allowing for a personalized experience.
High-Quality Results: The AI processes images to deliver high-quality results, ensuring that the generated images are clear and detailed.
Free Trial Access: New users can sign up and receive free credits to explore the app's features without any financial commitment.
Privacy Assurance: Undress.app does not save any user data, ensuring that all actions remain confidential and secure.
Compatibility: The app works with both male and female photos, as well as anime images, providing a wide range of customization options.
User-Friendly Interface: The platform is designed to be intuitive, making it easy for users to upload images and generate results quickly.
Regular Updates: The developers frequently update the app to improve functionality and security, ensuring a safe user experience.
### **My Experience**
Using Undress.app was a straightforward and enjoyable experience. After signing up, I was greeted with a clean and intuitive interface that made navigation a breeze.
I selected the bikini mode and uploaded a photo I was allowed to use. Within seconds, the AI processed the image and delivered a high-quality result without any blurriness.
I appreciated the variety of modes available, which allowed me to experiment with different styles. The privacy features gave me peace of mind, knowing that my data was secure and not stored anywhere.
Overall, my experience was positive, and I found the tool to be effective and user-friendly.
### **Pros:**
Easy to use with a user-friendly interface.
High-quality image generation with no blur.
Multiple modes for diverse customization.
Strong privacy and security measures in place.
Free trial credits are available for new users.
Works with various types of images, including anime.
### **Cons:**
Sign-up is required, which may deter some users.
Free credits may be limited, requiring users to purchase more for extensive use.
Results can vary based on the quality of the uploaded image.
โฉโฉโฉ[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Pornx.ai
Pornx.ai is revolutionizing the world of adult content with its cutting-edge AI nude generator. This innovative platform allows users to create stunning, personalized adult images and videos that cater to their unique fantasies.
With a user-friendly interface and a plethora of customization options, Pornx.ai empowers users to unleash their creativity and explore their desires in a safe and imaginative environment.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
AI Image Generator: Generate your own AI porn images by selecting models, including women, men, or transgender individuals. Customize with various filters, body types, skin tones, hairstyles, outfits, and backgrounds.
AI Video Generator: Craft personalized videos that reflect your imagination, allowing for a more immersive experience.
Quality Mode: Enhance your images with the "Quality" feature, which zooms in on details and increases resolution for a top-notch visual experience.
Custom Pose: Transfer character poses from your uploaded images to the generated images, making storytelling and personal pleasure more engaging.
In Paint Feature: Modify specific areas of your images by selecting and editing them, allowing for tailored adjustments and enhancements.
Community Engagement: Join the Pornx.ai Discord community to connect with other users, share experiences, and gain insights into the platform.
Age Verification: The platform ensures that all users are of legal adult age, maintaining a safe environment for mature content.
Free and Paid Plans: While the basic features are available for free, users can upgrade to a paid plan for additional benefits and enhanced functionalities.
### **My Experience**
Using Pornx.ai has been an exhilarating journey. The intuitive interface made it easy to navigate through the various features. I was particularly impressed with the AI Image Generator, which allowed me to create images that closely matched my vision.
The customization options were extensive, enabling me to experiment with different models and styles. The Quality Mode truly elevated the visual appeal of my creations, making them look professional and polished. Overall, my experience was enjoyable and fulfilling, as I could explore my creativity without limitations.
### **Pros**
User-Friendly Interface: Easy to navigate, even for beginners.
Extensive Customization: A wide range of options for personalizing images and videos.
High-Quality Output: The Quality Mode enhances the visual appeal significantly.
Community Support: Engaging with other users through Discord fosters a sense of belonging.
Free Access: Basic features are available at no cost, making it accessible to everyone.
### **Cons:**
Age Restrictions: Users must be over 18, which may limit access for younger audiences.
Paid Features: Some advanced functionalities require a subscription, which may not be ideal for all users.
Content Limitations: The platform is designed for adult content, which may not appeal to everyone.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
## 3. Seduced.ai
Seduced.ai is recognized as one of the leading AI nude generators available today. This innovative platform allows users to create stunning and unique NSFW images and videos effortlessly, without requiring any technical skills.
With a wide array of features and customizable options, Seduced.ai caters to various preferences and fetishes, making it a go-to choice for those looking to explore their fantasies in a safe and private environment.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Easy-to-Use Interface: The platform is designed for users of all skill levels, allowing anyone to generate content with just a few clicks.
Video Generation: Users can create smooth porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Mixable Extensions: Seduced.ai allows users to mix up to 8 extensions, enabling the creation of unique images that cannot be found elsewhere.
Character Reuse: Users can save previously generated characters for reuse in future creations, allowing for infinite scenarios.
Diverse AI Models: The platform offers a selection of 10 distinct AI models, allowing users to create both realistic and anime-style content.
Upscaling Options: Users can enhance the resolution of generated images two or three times, adding finer details for a more realistic appearance.
Privacy Control: Users have the option to keep their generated images and videos private, ensuring discretion.
Fetish Support: Seduced.ai covers a wide range of fetishes, providing extensions that empower users to produce content beyond typical capabilities.
### **My Experience**
Using Seduced.ai has been a remarkable experience. The user-friendly interface made it easy for me to navigate through the various features. I was particularly impressed by the extensive library of extensions available, which allowed me to mix and match different elements to create unique images.
The ability to generate videos was an added bonus, and I found the quality to be surprisingly high for such a short duration. The option to reuse characters made it easy to create a storyline, enhancing the overall experience.
### **Pros:**
User-Friendly: No technical skills are required to generate content.
High-Quality Output: The images and videos produced are of excellent quality.
Wide Range of Options: Extensive library of extensions and AI models to choose from.
Privacy Features: Users can keep their creations private.
Regular Updates: The platform frequently adds new features and extensions.
### **Cons:**
Subscription Costs: Some users may find the pricing plans to be on the higher side.
Limited Video Duration: The maximum video length of 6 seconds may not be sufficient for all users.
Content Restrictions: While the platform supports various fetishes, some niche interests may not be fully covered.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Undress.cc
Undress.cc is recognized as one of the best AI nude generators available today. This innovative platform utilizes advanced artificial intelligence technology to create realistic images of women without clothing.
Designed to be user-friendly and accessible, Undress.cc allows users to explore their fantasies in a safe and private environment. With its intuitive interface and various features, it has gained popularity among users looking for creative ways to generate undressed images.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Free Access: Undress.cc offers a free AI undressing tool, allowing users to generate images without any initial cost.
User-Friendly Interface: The platform is designed to be intuitive, making it easy for anyone to navigate and utilize its features effectively.
Multiple Modes: Users can choose from different modes, such as 'X-Ray Mode' for deep nude undressing or 'Lingerie Mode' to explore various fantasies.
Privacy and Security: The app prioritizes user security and confidentiality, ensuring that all generated images and user data remain private.
Registration Benefits: Upon signing up, users receive free credits to explore the service, including the deep nude functionality.
Legal Compliance: Undress.cc operates within the bounds of current legal frameworks, ensuring that its services are legitimate and lawful.
Creative Exploration: The tool provides a unique way to experiment with undressing images while respecting user preferences.
Continuous Updates: The platform is regularly updated to enhance user experience and incorporate the latest advancements in AI technology.
### **My Experience**
Using Undress.cc was a straightforward and enjoyable experience. After registering on the platform, I was greeted with a clean and intuitive interface that made navigation easy. Uploading a clear image was simple, and I was impressed by the variety of modes available.
I decided to try the 'X-Ray Mode' and was amazed at the realism of the generated images. The process was quick, and I appreciated the privacy measures in place, which made me feel secure while using the app. Overall, my experience with Undress.cc was positive, and I found it to be a valuable tool for creative exploration.
### **Pros:**
Free access to basic features.
Intuitive and user-friendly interface.
Multiple modes for different preferences.
Strong emphasis on user privacy and security.
Legal and compliant with current regulations.
### **Cons:**
Some advanced features may require purchasing credits.
Limited to images of women, which may not appeal to all users.
Potential ethical concerns regarding the use of generated images.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
## 5. Undressai.tools
Undressai.tools is a cutting-edge AI nude generator that utilizes advanced technologies to transform clothed images into realistic nude visuals.
Leveraging deep learning algorithms and sophisticated image processing techniques, this tool offers users a unique and innovative way to explore the artistic potential of AI-generated imagery.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Stable Diffusion: This model enhances image generation by producing high-quality, coherent outputs with minimal artifacts, significantly improving realism and detail in the undressed images.
Generative Adversarial Networks (GANs): GANs power Undressai.tools by utilizing two neural networks to generate highly realistic images of nudity, ensuring lifelike results.
Deep Learning Models: Sophisticated algorithms analyze clothing patterns and body structures to accurately create undressed versions of subjects, enhancing the overall quality of the output.
Image Synthesis: AI-driven image synthesis generates realistic skin textures that replace removed clothing, ensuring that the final images appear natural and believable.
Pose Estimation: Machine learning algorithms track and predict body poses, ensuring anatomically accurate undressing outcomes that respect the original image's context.
Convolutional Neural Networks (CNNs): CNNs extract key features from input images to guide the undressing process, improving output quality and detail.
Computer Vision and Image Recognition: These techniques identify and isolate clothing areas, allowing for precise removal and replacement, which is crucial for achieving realistic results.
Style Transfer: Advanced algorithms ensure that the generated nude images match the original's lighting, shading, and artistic style, maintaining the integrity of the original photograph.
### **My Experience**
Using Undressai.tools has been an intriguing experience. The interface is intuitive, making it easy to upload images and select the areas to modify. I was impressed by the speed at which the tool processed the images and the quality of the results.
The generated nude visuals were remarkably realistic, capturing the essence of the original images while effectively removing clothing. The ability to adjust and refine the output further enhanced my experience, allowing for creative experimentation.
### **Pros:**
User-Friendly Interface: The platform is easy to navigate, making it accessible for users of all skill levels.
High-Quality Outputs: The generated images are realistic and detailed, thanks to advanced AI technologies.
Privacy Focused: All generated images are auto-deleted within 48 hours, ensuring user privacy and data security.
Versatile Applications: The tool can be used for various purposes, including artistic exploration and personal projects.
### **Cons:**
Ethical Considerations: Users must be mindful of the ethical implications of generating nude images, particularly regarding consent and privacy.
Limited Image Formats: The tool currently supports only specific file formats (.jpg, .png, .heic), which may restrict some users.
Potential Misuse: There is a risk of the technology being misused for inappropriate purposes, necessitating responsible usage guidelines.
โฉโฉโฉ[**Try For Free**](https://bestaitools.top/fgRB)
## Frequently Asked Questions (FAQS)
### **1. What is AI Nude?**
AI Nude refers to various applications and tools that utilize artificial intelligence to create altered images, specifically by generating realistic nude versions of clothed individuals. These technologies often employ deep learning techniques and generative algorithms, enabling users to manipulate and alter visual content. However, their use has raised significant privacy and ethical concerns, particularly regarding consent and the potential for misuse.
### **2. How Does AI Nude Work?**
AI Nude applications typically use Generative Adversarial Networks (GANs), which consist of two neural networks: a generator that creates images and a discriminator that evaluates their realism. The following steps explain how AI Nude works:
Data Collection: Large datasets of images train the networks to understand realistic image formation.
Training Process: The generator produces images while the discriminator assesses them, providing feedback for refinement.
Iterative Improvement: Over multiple cycles, the generator enhances its capability to create realistic images, ultimately producing the final output.
### **3. What are the Applications of AI Nude Generator?**
AI Nude generators can be used for various applications, including:
Artistic Exploration: Artists may use AI nude tools to create digital art or explore the representation of human forms.
Marketing: Certain businesses might utilize AI to produce provocative content for advertising.
Cyber Harassment: Unfortunately, these tools are also misused for creating non-consensual images leading to harassment or blackmail.
It is crucial to note that while the technology has creative potential, its applications need to be approached with caution due to ethical and legal implications.
### **4. Is there privacy and ethical concerns regarding AI Nude?**
Yes, there are significant privacy and ethical concerns surrounding AI Nude technologies. Here are some key issues:
Lack of Consent: AI nude generators create images without the subject's permission, violating privacy rights.
Potential for Misuse: Generated images can be used for harassment, blackmail, or revenge, causing emotional and psychological harm.
Legal Gaps: Current laws often inadequately address the nuances of digital image manipulation, complicating legal enforcement.
Impact on Mental Health: Victims of non-consensual image manipulation may experience anxiety, depression, and damage to their personal and professional reputations.
|
KingEmpire/Wavre_3
|
KingEmpire
| 2025-02-28T18:48:59Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:22:12Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed
|
PrunaAI
| 2025-02-28T18:48:51Z | 0 | 0 | null |
[
"gguf",
"pruna-ai",
"base_model:CohereForAI/c4ai-command-r7b-arabic-02-2025",
"base_model:quantized:CohereForAI/c4ai-command-r7b-arabic-02-2025",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-28T15:07:51Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: CohereForAI/c4ai-command-r7b-arabic-02-2025
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the CohereForAI/c4ai-command-r7b-arabic-02-2025 model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed and below it, a specific filename to download, such as: c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
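The same file can also be fetched from Python. A small sketch using `huggingface_hub` (the filename is one of the quants already referenced in the commands above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="PrunaAI/CohereForAI-c4ai-command-r7b-arabic-02-2025-GGUF-smashed",
    filename="c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf",  # pick any quant from the Files tab
)
print(path)  # local path to the downloaded GGUF file
```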
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run the model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."},
    ]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
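Building on those guides, a minimal LangChain + llama-cpp-python sketch might look like the following (untested; it assumes the `langchain-community` package and reuses the GGUF file downloaded above):
```python
# pip install langchain-community llama-cpp-python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./c4ai-command-r7b-arabic-02-2025.IQ3_M.gguf",  # GGUF file from this repo
    n_ctx=32768,       # context length, as in the llama.cpp example above
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Write a story about llamas."))
```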
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original (base) model before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
ljafsdlfd/q-FrozenLake-v1-4x4-noSlippery
|
ljafsdlfd
| 2025-02-28T18:48:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-28T18:48:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub: the course-notebook helper (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="ljafsdlfd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
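As a quick check, the greedy policy can be read straight off the Q-table. A small, untested sketch, assuming the pickled dict exposes the table under a `qtable` key (the course convention):
```python
import numpy as np

policy = np.argmax(model["qtable"], axis=1)  # best action index per state
print(policy.reshape(4, 4))                  # view it on the 4x4 FrozenLake grid
```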
|
Vision-CAIR/LongVU_Llama3_2_1B
|
Vision-CAIR
| 2025-02-28T18:47:59Z | 75 | 10 | null |
[
"pytorch",
"cambrian_llama",
"video-text-to-text",
"arxiv:2410.17434",
"license:apache-2.0",
"region:us"
] |
video-text-to-text
| 2024-10-23T17:55:22Z |
---
tags:
- video-text-to-text
license: apache-2.0
---
# Citation
```
@article{shen2024longvu,
title={LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding},
author={Shen, Xiaoqian and Xiong, Yunyang and Zhao, Changsheng and Wu, Lemeng and Chen, Jun and Zhu, Chenchen and Liu, Zechun and Xiao, Fanyi and Varadarajan, Balakrishnan and Bordes, Florian and Liu, Zhuang and Xu, Hu and J. Kim, Hyunwoo and Soran, Bilge and Krishnamoorthi, Raghuraman and Elhoseiny, Mohamed and Chandra, Vikas},
journal={arXiv:2410.17434},
year={2024}
}
```
|
bunnycore/Qwen2.5-3B-Model-Stock-v4.1
|
bunnycore
| 2025-02-28T18:47:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:merge:Qwen/Qwen2.5-3B-Instruct",
"base_model:bunnycore/QwQen-3B-LCoT",
"base_model:merge:bunnycore/QwQen-3B-LCoT",
"base_model:bunnycore/Qwen-2.5-3b-R1-lora_model-v.1",
"base_model:merge:bunnycore/Qwen-2.5-3b-R1-lora_model-v.1",
"base_model:bunnycore/Qwen-2.5-s1k-R1-lora-v1.1",
"base_model:merge:bunnycore/Qwen-2.5-s1k-R1-lora-v1.1",
"base_model:bunnycore/Qwen2.5-3B-Model-Stock",
"base_model:merge:bunnycore/Qwen2.5-3B-Model-Stock",
"base_model:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"base_model:merge:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"base_model:bunnycore/Qwen2.5-3B-RP-Thinker-V2",
"base_model:merge:bunnycore/Qwen2.5-3B-RP-Thinker-V2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:43:48Z |
---
base_model:
- bunnycore/Qwen2.5-3B-RP-Thinker-V2
- bunnycore/Qwen-2.5-s1k-R1-lora-v1.1
- Qwen/Qwen2.5-3B-Instruct
- bunnycore/Qwen2.5-3B-Model-Stock
- bunnycore/Qwen2.5-3B-Model-Stock-v3.1
- bunnycore/Qwen-2.5-3b-R1-lora_model-v.1
- bunnycore/QwQen-3B-LCoT
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [bunnycore/Qwen2.5-3B-RP-Thinker-V2](https://huggingface.co/bunnycore/Qwen2.5-3B-RP-Thinker-V2) + [bunnycore/Qwen-2.5-s1k-R1-lora-v1.1](https://huggingface.co/bunnycore/Qwen-2.5-s1k-R1-lora-v1.1)
* [bunnycore/Qwen2.5-3B-Model-Stock](https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock)
* [bunnycore/Qwen2.5-3B-Model-Stock-v3.1](https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock-v3.1) + [bunnycore/Qwen-2.5-3b-R1-lora_model-v.1](https://huggingface.co/bunnycore/Qwen-2.5-3b-R1-lora_model-v.1)
* [bunnycore/QwQen-3B-LCoT](https://huggingface.co/bunnycore/QwQen-3B-LCoT)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: bunnycore/Qwen2.5-3B-Model-Stock
parameters:
weight: 0.5
- model: bunnycore/QwQen-3B-LCoT
- model: bunnycore/Qwen2.5-3B-Model-Stock-v3.1+bunnycore/Qwen-2.5-3b-R1-lora_model-v.1
- model: bunnycore/Qwen2.5-3B-RP-Thinker-V2+bunnycore/Qwen-2.5-s1k-R1-lora-v1.1
base_model: Qwen/Qwen2.5-3B-Instruct
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-3B-Instruct
```
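To reproduce a merge like this locally, the YAML above can be passed to mergekit's command-line entry point. A minimal sketch (flags may vary by mergekit version):
```python
# pip install mergekit
# Equivalent CLI call: mergekit-yaml config.yaml ./merged-model --cuda
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./merged-model", "--cuda"], check=True)
```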
|
Lily-Phillips-101-Challenge-Video-4K/FULL.Lily.Phillips.101.Challenge.Video.Viral.Video.On.Social.Media.X
|
Lily-Phillips-101-Challenge-Video-4K
| 2025-02-28T18:47:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:47:12Z |
<!-- HTML_TAG_END --><div>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Lily+Phillips">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a></p>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Lily+Phillips">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a></p>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Lily+Phillips"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
<!-- HTML_TAG_END --></div>
|
Vision-CAIR/LongVU_Qwen2_7B
|
Vision-CAIR
| 2025-02-28T18:46:41Z | 421 | 69 | null |
[
"safetensors",
"cambrian_qwen",
"video-text-to-text",
"dataset:shenxq/OneVision",
"dataset:shenxq/VideoChat2",
"arxiv:2410.17434",
"base_model:Vision-CAIR/LongVU_Qwen2_7B_img",
"base_model:finetune:Vision-CAIR/LongVU_Qwen2_7B_img",
"license:apache-2.0",
"model-index",
"region:us"
] |
video-text-to-text
| 2024-10-18T05:04:32Z |
---
datasets:
- shenxq/OneVision
- shenxq/VideoChat2
base_model:
- Vision-CAIR/LongVU_Qwen2_7B_img
pipeline_tag: video-text-to-text
model-index:
- name: llava-onevision-qwen-7b-ov
results:
- task:
type: multimodal
dataset:
name: EgoSchema
type: egoschema
metrics:
- type: accuracy
value: 67.6
name: accuracy
verified: true
- task:
type: multimodal
dataset:
name: MLVU
type: mlvu
metrics:
- type: accuracy
value: 65.4
name: accuracy
verified: true
- task:
type: multimodal
dataset:
name: MVBench
type: mvbench
metrics:
- type: accuracy
value: 66.9
name: accuracy
verified: true
- task:
type: multimodal
dataset:
name: VideoMME
type: videomme
metrics:
- type: accuracy
value: 60.6
name: accuracy
verified: true
license: apache-2.0
---
# LongVU
This repository contains the model based on Qwen2-7B as presented in [LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding](https://huggingface.co/papers/2410.17434).
Play with the model on the [HF demo](https://huggingface.co/spaces/Vision-CAIR/LongVU).
<div align="left">
<a href='https://vision-cair.github.io/LongVU'><img src="https://longvu.s3.amazonaws.com/assets/demo.gif" alt="Demo GIF" style="width: 100%; max-width: 650px;"></a>
</div>
# Use
We provide a simple generation example below. For more details, you can refer to [Github](https://github.com/Vision-CAIR/LongVU)
```python
# git clone https://github.com/Vision-CAIR/LongVU
import numpy as np
import torch
from longvu.builder import load_pretrained_model
from longvu.constants import (
DEFAULT_IMAGE_TOKEN,
IMAGE_TOKEN_INDEX,
)
from longvu.conversation import conv_templates, SeparatorStyle
from longvu.mm_datautils import (
KeywordsStoppingCriteria,
process_images,
tokenizer_image_token,
)
from decord import cpu, VideoReader
tokenizer, model, image_processor, context_len = load_pretrained_model(
"./checkpoints/longvu_qwen", None, "cambrian_qwen",
)
model.eval()
video_path = "./examples/video1.mp4"
qs = "Describe this video in detail"
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
fps = float(vr.get_avg_fps())
frame_indices = np.array([i for i in range(0, len(vr), round(fps),)])
video = []
for frame_index in frame_indices:
img = vr[frame_index].asnumpy()
video.append(img)
video = np.stack(video)
image_sizes = [video[0].shape[:2]]
video = process_images(video, image_processor, model.config)
video = [item.unsqueeze(0) for item in video]
qs = DEFAULT_IMAGE_TOKEN + "\n" + qs
conv = conv_templates["qwen"].copy()
conv.append_message(conv.roles[0], qs)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
keywords = [stop_str]
stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)
with torch.inference_mode():
output_ids = model.generate(
input_ids,
images=video,
image_sizes=image_sizes,
do_sample=False,
temperature=0.2,
max_new_tokens=128,
use_cache=True,
stopping_criteria=[stopping_criteria],
)
pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
```
# Citation
```
@article{shen2024longvu,
title={LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding},
author={Shen, Xiaoqian and Xiong, Yunyang and Zhao, Changsheng and Wu, Lemeng and Chen, Jun and Zhu, Chenchen and Liu, Zechun and Xiao, Fanyi and Varadarajan, Balakrishnan and Bordes, Florian and Liu, Zhuang and Xu, Hu and J. Kim, Hyunwoo and Soran, Bilge and Krishnamoorthi, Raghuraman and Elhoseiny, Mohamed and Chandra, Vikas},
journal={arXiv:2410.17434},
year={2024}
}
```
|
mradermacher/Qwen2.5-7B-Medicine-i1-GGUF
|
mradermacher
| 2025-02-28T18:45:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"medical",
"zh",
"base_model:WangCa/Qwen2.5-7B-Medicine",
"base_model:quantized:WangCa/Qwen2.5-7B-Medicine",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-28T16:59:11Z |
---
base_model: WangCa/Qwen2.5-7B-Medicine
language:
- zh
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/WangCa/Qwen2.5-7B-Medicine
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
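As a concrete starting point, one of the quants from the table below can be loaded with llama-cpp-python. An untested sketch (context size and GPU offload values are assumptions):
```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-7B-Medicine-i1-GGUF",
    filename="Qwen2.5-7B-Medicine.i1-Q4_K_M.gguf",  # "fast, recommended" in the table below
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if a GPU is available
out = llm("什么是高血压?", max_tokens=256)  # the base model is a Chinese medical fine-tune
print(out["choices"][0]["text"])
```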
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF/resolve/main/Qwen2.5-7B-Medicine.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Shinichie/Mar1_wtaTEST3
|
Shinichie
| 2025-02-28T18:44:22Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:43:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sapna-Shah-Leaks-Videos/Sapna-Shah-Leaks-Video
|
Sapna-Shah-Leaks-Videos
| 2025-02-28T18:43:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:38:00Z |
<div>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Sapna+Shah">🔴 ➤► Click here to watch (full video)</a></p>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Sapna+Shah">🔴 ➤► Click here (full video link)</a></p>
<p><a rel="nofollow" href="https://japantvshow.com/viral-video/?v=Sapna+Shah"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
</div>
|
Shinichie/Mar1_wtaTEST2
|
Shinichie
| 2025-02-28T18:43:09Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-28T18:41:56Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Mattia2700/mt5-large_AllDataSources_0.0002_constant_512_flattening
|
Mattia2700
| 2025-02-28T18:42:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-28T12:14:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
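As a rough starting point, here is a minimal sketch assuming standard 🤗 Transformers seq2seq loading for an mT5 checkpoint; the input text and generation settings are placeholders, since the prompt format used during fine-tuning is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Mattia2700/mt5-large_AllDataSources_0.0002_constant_512_flattening"

# mT5 is an encoder-decoder model, so it loads with the seq2seq head.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: the expected task/prompt format is not documented.
inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```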
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
artisanalwasp/sdxl-base-1.0-fbadataset5e-4-lrwrmp0-ep15-withpadding-noflip-lora-2
|
artisanalwasp
| 2025-02-28T18:42:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-02-28T18:12:07Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - artisanalwasp/sdxl-base-1.0-fbadataset5e-4-lrwrmp0-ep15-withpadding-noflip-lora-2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the artisanalwasp/resized_fba_with_letterbox_wo_wearscores2_train dataset. You can find some example images below.



LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
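A minimal sketch with diffusers, assuming the standard SDXL pipeline plus the `load_lora_weights` API and the fp16 VAE noted above; the prompt is a placeholder:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Use the fp16-safe VAE that was used for training (see above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter weights from this repository.
pipe.load_lora_weights("artisanalwasp/sdxl-base-1.0-fbadataset5e-4-lrwrmp0-ep15-withpadding-noflip-lora-2")

# Placeholder prompt; prompts similar to the training captions should work best.
image = pipe("a product photo", num_inference_steps=30).images[0]
image.save("example.png")
```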
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mradermacher/Qwen2.5-7B-Medicine-GGUF
|
mradermacher
| 2025-02-28T18:41:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"medical",
"zh",
"base_model:WangCa/Qwen2.5-7B-Medicine",
"base_model:quantized:WangCa/Qwen2.5-7B-Medicine",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-28T14:42:07Z |
---
base_model: WangCa/Qwen2.5-7B-Medicine
language:
- zh
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WangCa/Qwen2.5-7B-Medicine
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
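As one option (an assumption, not something this repo ships), here is a minimal sketch using the `llama-cpp-python` bindings with the Q4_K_M file from the table below:
```python
from llama_cpp import Llama

# Point model_path at the downloaded .gguf file (Q4_K_M is used as an example).
llm = Llama(model_path="Qwen2.5-7B-Medicine.Q4_K_M.gguf", n_ctx=4096)

# Chat-style completion; the base model is a Chinese medical chat model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "什么是高血压?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```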
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Medicine-GGUF/resolve/main/Qwen2.5-7B-Medicine.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
apple/DFN-public
|
apple
| 2025-02-28T18:41:02Z | 1,238 | 1 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"arxiv:2309.17425",
"license:apple-amlr",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-07-08T11:27:27Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) ViT-B/32 model trained on Conceptual Captions 12M, Conceptual Captions 3M, and Shutterstock 15M.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model is a DFN trained on publicly available data.
This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** CC12M + CC3M + SS15M
- **Papers:**
- Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Examples Seen:** 1.28B
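## Model Usage
A minimal usage sketch, assuming the converted checkpoint loads with the standard 🤗 Transformers CLIP classes and that the repository ships the matching processor config; scoring image-text similarity is how a DFN is used to filter data:
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumes the repository provides both the CLIP weights and the processor config.
model = CLIPModel.from_pretrained("apple/DFN-public")
processor = CLIPProcessor.from_pretrained("apple/DFN-public")

image = Image.open("example.jpg")
texts = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher image-text similarity means the pair is more likely to be kept by the filter.
print(outputs.logits_per_image.softmax(dim=-1))
```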
## Citation
```bibtex
@article{fang2023data,
title={Data Filtering Networks},
author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
journal={arXiv preprint arXiv:2309.17425},
year={2023}
}
```
|
ai-apps-superb/best-deepnude-ai-apps
|
ai-apps-superb
| 2025-02-28T18:39:43Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-02-28T18:38:53Z |
---
license: mit
---
# 5 Best Deepnude AI Apps Of 2025
The 5 deepnude apps that produce realistic and accurate results are listed below. These tools are secure, fast, and easy to use, and they offer plenty of customization options and enticing features.
## 1. Undress.app
Undress.app stands out as one of the best deepnude AI apps available today. This user-friendly platform allows users to generate high-quality images quickly and safely, making it a popular choice for those exploring the capabilities of AI in image manipulation.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)

### **Key Features**
User-Friendly Interface: Undress.app boasts an intuitive design that makes it easy for users of all skill levels to navigate and utilize its features.
Multiple Generation Modes: The app offers various undressing modes, including Lingerie, Bikini, and NSFW, allowing users to experiment with different styles.
High-Quality Results: The AI is trained on thousands of images, ensuring that the generated results are as realistic and clear as possible.
Privacy and Security: Undress.app prioritizes user confidentiality, ensuring that no data is saved or published, providing a safe experience.
Free Trial Credits: New users can sign up and receive free credits to explore the app's features without any financial commitment.
Compatibility: The app works with both male and female photos, as well as anime images, offering a wide range of customization options.
Regular Updates: The developers frequently update the app to improve functionality and security, ensuring a reliable user experience.
### **My Experience**
Using Undress.app was a seamless experience from start to finish. After signing up, I was greeted with a clean interface that made navigation straightforward. I tested the app by uploading a photo and selecting the NSFW mode.
The AI processed the image quickly, and within seconds, I received a high-quality result that exceeded my expectations. The level of detail and realism was impressive, showcasing the app's advanced technology. Additionally, I appreciated the privacy measures in place, which made me feel secure while using the platform.
### **Pros:**
Easy to use with a straightforward interface.
Offers a variety of undressing modes for customization.
Generates high-quality, realistic images.
Prioritizes user privacy and data security.
Free trial credits available for new users.
Compatible with various types of images, including anime.
Referral program to earn additional credits.
Regular updates enhance functionality and security.
### **Cons:**
Sign-up is required, which may deter some users.
Results can vary based on the quality of the uploaded image.
Some features may require a paid subscription for full access.
⏩⏩⏩[**Try Undress App For Free**](https://bestaitools.top/fgRB)
## 2. Porngen.art
Porngen.art stands out as a leading platform for creating AI-generated adult content. With the rise of deepnude AI applications, users can now explore their fantasies in a highly customizable and realistic manner.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
User-Friendly Interface: The platform is designed to be intuitive, allowing users to navigate easily and create content without technical expertise.
High-Resolution Image Generation: Users can generate stunning, high-quality images that meet their specific desires, ensuring a visually appealing experience.
Customizable Character Creation: The AI generator allows users to design characters based on various parameters such as body type, age, hair, and ethnicity.
Diverse Styles: Explore a variety of styles, including hentai, anime, and furry, catering to different tastes and preferences.
Multiple Generation Modes: Users can choose from different modes, such as lingerie, bondage, or explicit scenes, to tailor their creations to their liking.
Privacy and Security: Porngen.art prioritizes user privacy, ensuring that all uploaded images and generated content are kept confidential and deleted within 48 hours.
Free and Premium Options: The platform offers both free and premium plans, allowing users to explore features without financial commitment while providing enhanced capabilities for paying members.
Community Gallery: Users can browse a gallery of examples to get inspired and see the potential of the AI generator in action.
### **My Experience**
Using Porngen.art has been a fascinating journey. The registration process was straightforward, and I quickly gained access to the platform. I was impressed by the variety of customization options available. I uploaded my own images and experimented with different styles and features.
The AI's ability to generate realistic images was astonishing, and I found myself lost in the creative process. The community gallery provided ample inspiration, and I appreciated the ability to see what others had created.
### **Pros:**
Highly Realistic Images: The AI generates images that are incredibly detailed and lifelike.
Extensive Customization: Users can tailor their creations to fit their specific fantasies.
Privacy Assurance: The platform takes user privacy seriously, ensuring confidentiality.
Variety of Styles: The ability to explore different genres keeps the experience fresh and exciting.
### **Cons:**
Learning Curve: While the interface is user-friendly, some features may require time to master.
Variable Results: The quality of generated images can depend heavily on the input images used.
Ethical Concerns: The use of AI in adult content raises questions about consent and the potential for misuse.
Subscription Costs: While there are free options, premium features may come at a cost that some users might find prohibitive.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 3. Pornx.ai
Pornx.ai is a revolutionary platform that allows users to explore their fantasies through the power of AI-generated adult content. With a focus on creativity and customization, this deepnude AI app offers a unique experience for those looking to create personalized visuals. Whether you want to generate images or videos, Pornx.ai provides the tools to bring your imagination to life.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
AI Image Generator: Create your own AI porn images by selecting models, including women, men, or transgender individuals. Customize with various filters, body types, skin tones, hairstyles, outfits, and backgrounds.
AI Video Generator: This cutting-edge tool allows users to craft personalized videos that reflect their imagination, making the creative process seamless and enjoyable.
Quality Mode: Elevate your images with the "Quality" feature, which enhances details and resolution. Choose from Base, High, or Ultra quality levels to transform your fantasies into stunning visuals.
Custom Pose: Transfer character poses from your uploaded images to generated visuals effortlessly. This feature is designed for storytelling or personal pleasure, especially for "Gold" users in Private mode.
In Paint: Tailor your images by modifying specific areas. This feature allows you to tweak details or introduce new elements, giving you complete control over your creations.
Community Engagement: Join the vibrant Discord community to connect with other users, share experiences, and gain inspiration for your creations.
Age Verification: The platform ensures that all users are of legal adult age, maintaining a safe environment for mature content.
Support and Help: Access a dedicated support team for any inquiries or assistance needed while using the platform.
### **My Experience**
Using Pornx.ai has been an exhilarating journey. The user interface is intuitive, making it easy to navigate through the various features. I particularly enjoyed the AI Image Generator, where I could experiment with different models and customize them to match my vision.
The Quality Mode truly enhances the final output, providing crisp and detailed images that exceeded my expectations. The Custom Pose feature was a game-changer, allowing me to create dynamic scenes that felt alive and engaging. Overall, my experience was filled with creativity and satisfaction.
### **Pros:**
Highly Customizable: Users can create unique content tailored to their preferences.
Advanced Features: Tools like Quality Mode and Custom Pose enhance the creative process.
Community Support: Engaging with a community of like-minded individuals adds value to the experience.
Safe Environment: Age verification ensures that the platform is used responsibly.
### **Cons:**
Learning Curve: New users may take some time to fully understand all features.
Subscription Costs: Some advanced features may require a paid plan, which could be a barrier for some users.
Content Limitations: As with any AI-generated content, there may be limitations in realism and variety.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 4. Seduced.ai
Seduced.ai is a leading platform in the realm of AI-generated adult content, particularly known for its deepnude capabilities. This innovative application allows users to create unique and personalized adult images and videos with ease.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
Video Generation: Seduced.ai enables users to generate smooth porn videos of up to 6 seconds, providing a dynamic experience.
Unique Results: Users can mix up to 8 extensions to create images that are truly one-of-a-kind, ensuring that no two creations are alike.
Character Reuse: The platform allows for the saving and reuse of previously generated characters, enabling them to appear in various scenarios.
Diverse Content Creation: Users can choose from a range of 10 distinct AI models to create either realistic or anime-style content.
Fetish Extensions: Seduced.ai offers a wide array of extensions that cater to various fetishes, expanding the creative possibilities for users.
Upscaling Options: Users can enhance the resolution of generated images, adding finer details for a more realistic appearance.
No Technical Skills Required: The platform is designed for ease of use, allowing anyone to create adult content without needing technical expertise.
Privacy Options: Users have the option to keep their generated images and videos private, ensuring discretion and confidentiality.
### **My Experience**
Using Seduced.ai has been a remarkable experience. The interface is intuitive, making it easy to navigate through the various features. I was particularly impressed by the ability to mix different extensions, which allowed me to create unique and personalized content.
The video generation feature was a highlight, as it provided a dynamic aspect to my creations. Additionally, the option to reuse characters made it convenient to develop ongoing narratives in my content. Overall, Seduced.ai has proven to be a powerful tool for anyone interested in exploring AI-generated adult content.
### **Pros:**
User-Friendly: The platform is accessible to users of all skill levels.
Variety of Content: Offers a wide range of models and extensions for diverse content creation.
High-Quality Output: The generated images and videos are of impressive quality.
Privacy Features: Users can choose to keep their creations private.
### **Cons:**
Subscription Costs: Some users may find the pricing plans to be on the higher side.
Limited Video Length: The maximum video length of 6 seconds may not be sufficient for all users.
Content Restrictions: While the platform supports various fetishes, some users may find certain limitations in content generation.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## 5. Soulgen.net
Soulgen.net is a cutting-edge platform that harnesses the power of artificial intelligence to create stunning images from text prompts. Among the best deepnude AI apps available,
Soulgen stands out for its user-friendly interface and innovative features that allow users to bring their creative visions to life. Whether you want to create a unique character, edit existing images, or explore endless possibilities, Soulgen has something to offer for everyone.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
### **Key Features**
AI Magic Tool from Text: Generate images from simple text prompts in mere seconds, making creativity accessible to all.
Create Your Dream Character: Soulgen allows users to describe their ideal character, transforming words into visual art effortlessly.
Portrait Creation: Upload a reference photo and let the AI create a character that resembles someone you know, adding a personal touch to your creations.
Edit Your Images: Enhance your images by adding, extending, or removing content using straightforward text prompts, activating your creative superpowers.
AI Outpainting: Expand your images beyond their original boundaries by resizing and adding new elements like backgrounds and characters.
Unique Image Generation: Each image created is unique, based on your specific descriptions, ensuring that your creations stand out.
Commercial Use: Users can utilize their created art for commercial purposes, provided they create the art themselves.
No Copyright Issues: Since Soulgen generates images that do not exist, users do not have to worry about copyright concerns.
### **My Experience**
Using Soulgen.net has been an exhilarating experience. The platform's intuitive design makes it easy to navigate, even for those who may not be tech-savvy. I was able to log in quickly and start creating right away.
The process of generating images is seamless; I simply entered a description of what I wanted, clicked "Create," and within seconds, I had a stunning image that matched my vision. The ability to upload reference photos for character creation added a layer of personalization that I found particularly enjoyable. Overall, my experience with Soulgen has been positive, and I appreciate the creative freedom it offers.
### **Pros:**
User-Friendly Interface: Easy to navigate, making it accessible for all users.
Fast Image Generation: Create images in seconds, saving time and effort.
Unique Creations: Each image is tailored to your specific descriptions, ensuring uniqueness.
Commercial Use Allowed: Flexibility to use created images for business purposes.
### **Cons:**
Dependence on Text Prompts: The quality of the output heavily relies on the clarity of the input description.
Limited Customization: While editing is possible, some users may find the options somewhat limited compared to traditional graphic design tools.
⏩⏩⏩[**Try For Free**](https://bestaitools.top/fgRB)
## Frequently Asked Questions (FAQS)
### **1. What is Deepnude AI?**
Deepnude AI is a controversial software that uses artificial intelligence and deep learning algorithms to create realistic nude images from clothed photos. Developed by an anonymous creator, it gained notoriety for promoting non-consensual image manipulation, raising ethical and legal concerns.
### **2. How Does Deepnude AI Work?**
The software uses Generative Adversarial Networks (GANs), which involve two neural networks, a generator and a discriminator, that work together to produce high-quality images by training on a dataset of images.
### **3. What are the Applications of Deepnude AI?**
Here are the main applications of Deepnude AI:
Digital Art and Illustration
Deepnude AI can be utilized to create unique pieces in digital art, enabling artists to experiment with realistic nudity in their artworks while transforming traditional images into creative interpretations.
Adult Entertainment
The technology is prominently featured in the adult entertainment industry, allowing creators to generate realistic nude images quickly. This application has created significant ethical and legal discussions regarding consent and privacy.
Personal Use for Artistic Exploration
Some individuals use Deepnude AI for personal projects or explorations of body positivity, creating artistic representations of themselves or expressing their creative visions in a private setting.
Deepfake Technology Development
Deepnude AI contributes to research and advancements in deepfake technology, helping developers understand the implications and capabilities of AI-generated imagery, especially in the context of ethical usage and policy-making.
Photography Enhancement
It can be applied to enhance or edit photographs in a creative way, allowing photographers to push the boundaries of traditional photography techniques and create striking visuals.
### **4. What were the Factors Contributing to the Blurring of DeepNude Images?**
Lack of advanced AI algorithms.
Insufficient training data.
Limited computational resources.
Over-reliance on pre-trained models.
Lack of manual editing capabilities.
Inadequate image processing techniques.
Limited control over image parameters.
### **5. What are the Ethical and Legal Considerations When Using DeepNude AI?**
The main ethical concerns include consent, as the technology can create nude images of individuals without their permission, which can lead to harassment and emotional distress. Legally, many jurisdictions have laws against non-consensual explicit content, which poses risks for users.
### **6. How Can I Improve the Quality of My DeepNude Images?**
To enhance the quality of DeepNude images, it is essential to:
Use high-resolution images for input.
Adjust available settings for better output quality.
Ensure good lighting and clarity when capturing images.
### **7. What are Some Tips for Creating High-Quality DeepNude Pics Without Blur?**
Utilize applications specifically designed to avoid blurring.
Regularly update and use the latest algorithms.
Experiment with different tools to find the best output.
|
apple/DFN2B-CLIP-ViT-B-16
|
apple
| 2025-02-28T18:39:34Z | 13,014 | 8 |
open_clip
|
[
"open_clip",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2023-10-31T03:52:33Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-2B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 2B images that were filtered from a pool of 12.8B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B).
These weights are directly usable in OpenCLIP (image + text).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-2b
- **Papers:**
- Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Examples Seen:** 12.8B
## Model Metrics
| Dataset | Metric |
|:-----------------------|---------:|
| ImageNet 1k | 0.76236 |
| Caltech-101 | 0.942894 |
| CIFAR-10 | 0.9672 |
| CIFAR-100 | 0.8347 |
| CLEVR Counts | 0.232333 |
| CLEVR Distance | 0.245267 |
| Country211 | 0.19545 |
| Describable Textures | 0.575532 |
| EuroSAT | 0.54 |
| FGVC Aircraft | 0.248503 |
| Food-101 | 0.91303 |
| GTSRB | 0.469913 |
| ImageNet Sketch | 0.620684 |
| ImageNet v2 | 0.682 |
| ImageNet-A | 0.482133 |
| ImageNet-O | 0.493 |
| ImageNet-R | 0.830967 |
| KITTI Vehicle Distance | 0.192686 |
| MNIST | 0.782 |
| ObjectNet | 0.631851 |
| Oxford Flowers-102 | 0.819895 |
| Oxford-IIIT Pet | 0.936907 |
| Pascal VOC 2007 | 0.788528 |
| PatchCamelyon | 0.521545 |
| Rendered SST2 | 0.486546 |
| RESISC45 | 0.61381 |
| Stanford Cars | 0.90735 |
| STL-10 | 0.97525 |
| SUN397 | 0.714162 |
| SVHN | 0.598955 |
| Flickr | 0.7728 |
| MSCOCO | 0.518773 |
| WinoGAViL | 0.541748 |
| iWildCam | 0.155574 |
| Camelyon17 | 0.499283 |
| FMoW | 0.141149 |
| Dollar Street | 0.625 |
| GeoDE | 0.891023 |
| **Average** | **0.609232** |
## Model Usage
### With OpenCLIP
```
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-B-16')
tokenizer = get_tokenizer('ViT-B-16')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
## Citation
```bibtex
@article{fang2023data,
title={Data Filtering Networks},
author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
journal={arXiv preprint arXiv:2309.17425},
year={2023}
}
```
|
apple/DFN2B-CLIP-ViT-L-14
|
apple
| 2025-02-28T18:39:33Z | 12,585 | 14 |
open_clip
|
[
"open_clip",
"pytorch",
"clip",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2023-10-30T23:07:24Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-2B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 2B images that were filtered from a pool of 12.8B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B).
This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn).
These weights are directly usable in OpenCLIP (image + text).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-2b
- **Papers:**
- Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Examples Seen:** 12.8B
## Model Metrics
| Eval Dataset | Metric |
|:-----------------------|---------:|
| ImageNet 1k | 0.81396 |
| Caltech-101 | 0.953141 |
| CIFAR-10 | 0.9836 |
| CIFAR-100 | 0.8835 |
| CLEVR Counts | 0.3338 |
| CLEVR Distance | 0.248733 |
| Country211 | 0.28237 |
| Describable Textures | 0.66117 |
| EuroSAT | 0.646296 |
| FGVC Aircraft | 0.395945 |
| Food-101 | 0.945861 |
| GTSRB | 0.616152 |
| ImageNet Sketch | 0.683311 |
| ImageNet v2 | 0.7453 |
| ImageNet-A | 0.6676 |
| ImageNet-O | 0.3915 |
| ImageNet-R | 0.900033 |
| KITTI Vehicle Distance | 0.201125 |
| MNIST | 0.8468 |
| ObjectNet | 0.739367 |
| Oxford Flowers-102 | 0.865822 |
| Oxford-IIIT Pet | 0.954941 |
| Pascal VOC 2007 | 0.81644 |
| PatchCamelyon | 0.63028 |
| Rendered SST2 | 0.551345 |
| RESISC45 | 0.733175 |
| Stanford Cars | 0.947146 |
| STL-10 | 0.976625 |
| SUN397 | 0.754565 |
| SVHN | 0.653503 |
| Flickr | 0.8244 |
| MSCOCO | 0.570363 |
| WinoGAViL | 0.551645 |
| iWildCam | 0.18877 |
| Camelyon17 | 0.626179 |
| FMoW | 0.222137 |
| Dollar Street | 0.688084 |
| GeoDE | 0.91023 |
| **Average** | **0.668558** |
## Model Usage
### With OpenCLIP
```
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-L-14')
tokenizer = get_tokenizer('ViT-L-14')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
## Citation
```bibtex
@article{fang2023data,
title={Data Filtering Networks},
author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
journal={arXiv preprint arXiv:2309.17425},
year={2023}
}
```
|
apple/DFN5B-CLIP-ViT-H-14-378
|
apple
| 2025-02-28T18:39:32Z | 317,539 | 84 |
open_clip
|
[
"open_clip",
"pytorch",
"clip",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2023-10-30T23:08:21Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B + 30B additional public image-text pairs).
This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn).
These weights are directly usable in OpenCLIP (image + text).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-5b
- **Papers:**
- Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Samples Seen:** 39B (224 x 224) + 5B (384 x 384)
## Model Metrics
| dataset | metric |
|:-----------------------|---------:|
| ImageNet 1k | 0.84218 |
| Caltech-101 | 0.954479 |
| CIFAR-10 | 0.9879 |
| CIFAR-100 | 0.9041 |
| CLEVR Counts | 0.362467 |
| CLEVR Distance | 0.206067 |
| Country211 | 0.37673 |
| Describable Textures | 0.71383 |
| EuroSAT | 0.608333 |
| FGVC Aircraft | 0.719938 |
| Food-101 | 0.963129 |
| GTSRB | 0.679018 |
| ImageNet Sketch | 0.73338 |
| ImageNet v2 | 0.7837 |
| ImageNet-A | 0.7992 |
| ImageNet-O | 0.3785 |
| ImageNet-R | 0.937633 |
| KITTI Vehicle Distance | 0.38256 |
| MNIST | 0.8372 |
| ObjectNet <sup>1</sup> | 0.796867 |
| Oxford Flowers-102 | 0.896834 |
| Oxford-IIIT Pet | 0.966841 |
| Pascal VOC 2007 | 0.826255 |
| PatchCamelyon | 0.695953 |
| Rendered SST2 | 0.566722 |
| RESISC45 | 0.755079 |
| Stanford Cars | 0.959955 |
| STL-10 | 0.991125 |
| SUN397 | 0.772799 |
| SVHN | 0.671251 |
| Flickr | 0.8808 |
| MSCOCO | 0.636889 |
| WinoGAViL | 0.571813 |
| iWildCam | 0.224911 |
| Camelyon17 | 0.711536 |
| FMoW | 0.209024 |
| Dollar Street | 0.71729 |
| GeoDE | 0.935699 |
| **Average** | **0.709421** |
[1]: Center-crop pre-processing used for ObjectNet (squashing results in lower accuracy of 0.737)
## Model Usage
### With OpenCLIP
```
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-378')
tokenizer = get_tokenizer('ViT-H-14')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
## Citation
```bibtex
@article{fang2023data,
title={Data Filtering Networks},
author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
journal={arXiv preprint arXiv:2309.17425},
year={2023}
}
```
|
apple/MobileCLIP-B
|
apple
| 2025-02-28T18:39:28Z | 23 | 2 |
mobileclip
|
[
"mobileclip",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2024-03-06T16:35:56Z |
---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
library_name: mobileclip
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-B** checkpoint.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
## How to Use
First, download the desired checkpoint by visiting one of the links in the table above, then click the `Files and versions` tab, and download the PyTorch checkpoint.
For programmatic downloading, if you have `huggingface_hub` installed, you can also run:
```
huggingface-cli download pcuenq/MobileCLIP-B
```
Then, install [`ml-mobileclip`](https://github.com/apple/ml-mobileclip) by following the instructions in the repo. It uses an API similar to [`open_clip`'s](https://github.com/mlfoundations/open_clip).
You can run inference with a code snippet like the following:
```py
import torch
from PIL import Image
import mobileclip
model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_b', pretrained='/path/to/mobileclip_b.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_b')
image = preprocess(Image.open("docs/fig_accuracy_latency.png").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
|
apple/MobileCLIP-S2
|
apple
| 2025-02-28T18:39:27Z | 42 | 6 |
mobileclip
|
[
"mobileclip",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2024-03-06T17:14:03Z |
---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
library_name: mobileclip
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S2** checkpoint.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
## How to Use
First, download the desired checkpoint by visiting one of the links in the table above, then click the `Files and versions` tab, and download the PyTorch checkpoint.
For programmatic downloading, if you have `huggingface_hub` installed, you can also run:
```
huggingface-cli download pcuenq/MobileCLIP-S2
```
Then, install [`ml-mobileclip`](https://github.com/apple/ml-mobileclip) by following the instructions in the repo. It uses an API similar to [`open_clip`'s](https://github.com/mlfoundations/open_clip).
You can run inference with a code snippet like the following:
```py
import torch
from PIL import Image
import mobileclip
model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_s2', pretrained='/path/to/mobileclip_s2.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_s2')
image = preprocess(Image.open("docs/fig_accuracy_latency.png").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
|
apple/MobileCLIP-S1
|
apple
| 2025-02-28T18:39:26Z | 28 | 4 |
mobileclip
|
[
"mobileclip",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2024-03-06T17:13:13Z |
---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
library_name: mobileclip
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S1** checkpoint.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
## How to Use
First, download the desired checkpoint by visiting one of the links in the table above, then click the `Files and versions` tab, and download the PyTorch checkpoint.
For programmatic downloading, if you have `huggingface_hub` installed, you can also run:
```
huggingface-cli download pcuenq/MobileCLIP-S1
```
Then, install [`ml-mobileclip`](https://github.com/apple/ml-mobileclip) by following the instructions in the repo. It uses an API similar to [`open_clip`'s](https://github.com/mlfoundations/open_clip).
You can run inference with a code snippet like the following:
```py
import torch
from PIL import Image
import mobileclip
model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_s1', pretrained='/path/to/mobileclip_s1.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_s1')
image = preprocess(Image.open("docs/fig_accuracy_latency.png").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
|
apple/MobileCLIP-S2-OpenCLIP
|
apple
| 2025-02-28T18:39:24Z | 44,579 | 6 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
zero-shot-image-classification
| 2024-06-07T14:48:32Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S2** checkpoint for OpenCLIP.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
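## How to Use
A minimal usage sketch, assuming this checkpoint resolves through OpenCLIP's `hf-hub:` loading (the same pattern as the DFN snippets above):
```python
import torch
import torch.nn.functional as F
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Assumes open_clip can pull the model config and weights directly from this repo.
model, preprocess = create_model_from_pretrained('hf-hub:apple/MobileCLIP-S2-OpenCLIP')
tokenizer = get_tokenizer('hf-hub:apple/MobileCLIP-S2-OpenCLIP')

image = preprocess(Image.open("example.jpg").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = F.normalize(model.encode_image(image), dim=-1)
    text_features = F.normalize(model.encode_text(text), dim=-1)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```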
|
apple/MobileCLIP-S1-OpenCLIP
|
apple
| 2025-02-28T18:39:23Z | 2,704 | 10 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
zero-shot-image-classification
| 2024-06-07T14:44:41Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S1** checkpoint for OpenCLIP.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
|
apple/mobileclip_b_timm
|
apple
| 2025-02-28T18:39:22Z | 104 | 2 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-06-07T18:14:19Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-B** checkpoint for timm.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
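## Usage

This card does not include a usage snippet. The sketch below extracts image embeddings with `timm`, assuming this checkpoint resolves through timm's `hf_hub:` prefix; the text tower is not part of this repo, so zero-shot classification still requires the matching OpenCLIP or `ml-mobileclip` release.

```py
import timm
import torch
from PIL import Image

# Image tower only; hf_hub loading for this specific repo is an assumption.
model = timm.create_model("hf_hub:apple/mobileclip_b_timm", pretrained=True, num_classes=0)
model.eval()

# Build the preprocessing pipeline from the model's pretrained data config.
config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**config, is_training=False)

image = transform(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    embedding = model(image)  # pooled image features, shape (1, feature_dim)
print(embedding.shape)
```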
|
apple/mobileclip_b_lt_timm
|
apple
| 2025-02-28T18:39:22Z | 4,213 | 5 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-06-07T18:17:32Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-B (LT)** checkpoint for timm.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
|
apple/mobileclip_s2_timm
|
apple
| 2025-02-28T18:39:21Z | 327 | 4 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-06-06T10:23:38Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S2** checkpoint for timm.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
|
apple/mobileclip_s1_timm
|
apple
| 2025-02-28T18:39:20Z | 108 | 2 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-06-06T10:22:47Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S1** checkpoint for timm.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
|
apple/mobileclip_s0_timm
|
apple
| 2025-02-28T18:39:20Z | 157 | 10 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-06-06T10:18:00Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S0** checkpoint compatible with TIMM.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
|
Bu-Guru-Salsa-Original-X-TV/Bu-Guru-Salsa.viral.video.on.social.media.x.twitter.now
|
Bu-Guru-Salsa-Original-X-TV
| 2025-02-28T18:37:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:35:03Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/?V=Bu-Guru-Salsa)
[Click Here to Watch (Full Video)](https://lekedvideo.xyz/watch/?V=Bu-Guru-Salsa)
[Click Here (Full Video Link)](https://lekedvideo.xyz/watch/?V=Bu-Guru-Salsa)
|
musa99/teachim
|
musa99
| 2025-02-28T18:37:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:adapter:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"region:us"
] | null | 2025-02-28T16:47:31Z |
---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
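As a minimal, untested sketch, the adapter can presumably be loaded on top of the base model listed in the card metadata (`unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit`) with `peft` and `transformers`; `bitsandbytes` is required since the base checkpoint is 4-bit. The prompt below is only an illustrative placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit"  # from the card metadata
adapter_id = "musa99/teachim"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```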
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
lesso07/12a46106-1406-4c89-b7cb-f0342a244ed4
|
lesso07
| 2025-02-28T18:34:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2025-02-28T17:18:57Z |
---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12a46106-1406-4c89-b7cb-f0342a244ed4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbea0958a4608408_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbea0958a4608408_train_data.json
type:
field_input: Example
field_instruction: '@partOfSpeech'
field_output: Definition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso07/12a46106-1406-4c89-b7cb-f0342a244ed4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000207
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/fbea0958a4608408_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 70
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3032058a-ae5e-4c82-93d3-03dac098fbaf
wandb_project: 07a
wandb_run: your_name
wandb_runid: 3032058a-ae5e-4c82-93d3-03dac098fbaf
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 12a46106-1406-4c89-b7cb-f0342a244ed4
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 70
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 7.1015 |
| 3.5131 | 0.0035 | 50 | 3.6608 |
| 3.1439 | 0.0070 | 100 | 3.6134 |
| 3.2374 | 0.0105 | 150 | 3.1345 |
| 3.5958 | 0.0140 | 200 | 3.3461 |
| 3.2674 | 0.0175 | 250 | 2.9513 |
| 3.3788 | 0.0211 | 300 | 2.9841 |
| 3.3656 | 0.0246 | 350 | 2.8612 |
| 3.3637 | 0.0281 | 400 | 2.6070 |
| 3.5584 | 0.0316 | 450 | 2.5685 |
| 3.4697 | 0.0351 | 500 | 2.5668 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Sapna-Shah-Video-X/VIRAL.Sapna-Shah.Viral.Video.Full.Original.Video.Social.Media.X
|
Sapna-Shah-Video-X
| 2025-02-28T18:34:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:33:59Z |
<p><a href="https://t.co/f7ohVkpVkt">Click Here to Watch (Full Video)</a></p>
<p><a href="https://t.co/f7ohVkpVkt">Click Here (Full Video Link)</a></p>
|
cst7/3d-icon-Flux-LoRA_with_T5
|
cst7
| 2025-02-28T18:32:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-28T17:31:04Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: 3d icon in the style of <s0><s1>
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - cst7/3d-icon-Flux-LoRA_with_T5
<Gallery />
## Model description
These are cst7/3d-icon-Flux-LoRA_with_T5 DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA training for the text encoder was not enabled.
Pivotal tuning was enabled.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Download model
[Download the *.safetensors LoRA](cst7/3d-icon-Flux-LoRA_with_T5/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cst7/3d-icon-Flux-LoRA_with_T5', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='cst7/3d-icon-Flux-LoRA_with_T5', filename='output/3d-icon-Flux-LoRA_with_T5_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["t5"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('3d icon in the style of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# After loading the pipeline, LoRA weights and pivotal-tuning embeddings as shown above,
# generation is a single call with the trigger tokens in the prompt:
image = pipeline('3d icon in the style of <s0><s1>').images[0]
image.save("3d_icon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
apple/DepthPro-mixin
|
apple
| 2025-02-28T18:31:42Z | 32 | 5 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"depth-estimation",
"arxiv:2410.02073",
"license:apple-amlr",
"region:us"
] |
depth-estimation
| 2024-10-05T00:23:52Z |
---
license: apple-amlr
pipeline_tag: depth-estimation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
# Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
Depth Pro was introduced in **[Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/abs/2410.02073)**, by *Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, and Vladlen Koltun*.
The checkpoint in this repository is a reference implementation, which has been re-trained. Its performance is close to the model reported in the paper but does not match it exactly.
## How to Use
Please follow the steps in the [code repository](https://github.com/apple/ml-depth-pro) to set up your environment. Then you can:
### Running from Python
```python
from huggingface_hub import PyTorchModelHubMixin
from depth_pro import create_model_and_transforms, load_rgb
from depth_pro.depth_pro import (create_backbone_model, load_monodepth_weights,
DepthPro, DepthProEncoder, MultiresConvDecoder)
import depth_pro
from torchvision.transforms import Compose, Normalize, ToTensor
class DepthProWrapper(DepthPro, PyTorchModelHubMixin):
"""Depth Pro network."""
def __init__(
self,
patch_encoder_preset: str,
image_encoder_preset: str,
decoder_features: str,
fov_encoder_preset: str,
use_fov_head: bool = True,
**kwargs,
):
"""Initialize Depth Pro."""
patch_encoder, patch_encoder_config = create_backbone_model(
preset=patch_encoder_preset
)
image_encoder, _ = create_backbone_model(
preset=image_encoder_preset
)
fov_encoder = None
if use_fov_head and fov_encoder_preset is not None:
fov_encoder, _ = create_backbone_model(preset=fov_encoder_preset)
dims_encoder = patch_encoder_config.encoder_feature_dims
hook_block_ids = patch_encoder_config.encoder_feature_layer_ids
encoder = DepthProEncoder(
dims_encoder=dims_encoder,
patch_encoder=patch_encoder,
image_encoder=image_encoder,
hook_block_ids=hook_block_ids,
decoder_features=decoder_features,
)
decoder = MultiresConvDecoder(
dims_encoder=[encoder.dims_encoder[0]] + list(encoder.dims_encoder),
dim_decoder=decoder_features,
)
super().__init__(
encoder=encoder,
decoder=decoder,
last_dims=(32, 1),
use_fov_head=use_fov_head,
fov_encoder=fov_encoder,
)
# Load model and preprocessing transform
model = DepthProWrapper.from_pretrained("apple/DepthPro-mixin")
transform = Compose(
[
ToTensor(),
Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
model.eval()
# Load and preprocess an image.
image, _, f_px = depth_pro.load_rgb(image_path)
image = transform(image)
# Run inference.
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"] # Depth in [m].
focallength_px = prediction["focallength_px"] # Focal length in pixels.
```
### Evaluation (boundary metrics)
Boundary metrics are implemented in `eval/boundary_metrics.py` and can be used as follows:
```python
# for a depth-based dataset
boundary_f1 = SI_boundary_F1(predicted_depth, target_depth)
# for a mask-based dataset (image matting / segmentation)
boundary_recall = SI_boundary_Recall(predicted_depth, target_mask)
```
## Citation
If you find our work useful, please cite the following paper:
```bibtex
@article{Bochkovskii2024:arxiv,
author = {Aleksei Bochkovskii and Ama\"{e}l Delaunoy and Hugo Germain and Marcel Santos and
Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
title = {Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
journal = {arXiv},
year = {2024},
}
```
## Acknowledgements
Our codebase is built using multiple opensource contributions, please see [Acknowledgements](https://github.com/apple/ml-depth-pro/blob/main/ACKNOWLEDGEMENTS.md) for more details.
Please check the paper for a complete list of references and datasets used in this work.
|
apple/DepthPro
|
apple
| 2025-02-28T18:31:41Z | 2,025 | 403 |
depth-pro
|
[
"depth-pro",
"depth-estimation",
"arxiv:2410.02073",
"license:apple-amlr",
"region:us"
] |
depth-estimation
| 2024-10-03T14:45:37Z |
---
license: apple-amlr
pipeline_tag: depth-estimation
library_name: depth-pro
---
# Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
Depth Pro was introduced in **[Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/abs/2410.02073)**, by *Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, and Vladlen Koltun*.
The checkpoint in this repository is a reference implementation, which has been re-trained. Its performance is close to the model reported in the paper but does not match it exactly.
## How to Use
Please follow the steps in the [code repository](https://github.com/apple/ml-depth-pro) to set up your environment. Then you can download the checkpoint from the _Files and versions_ tab above, or use the `huggingface-hub` CLI:
```bash
pip install huggingface-hub
huggingface-cli download --local-dir checkpoints apple/DepthPro
```
### Running from commandline
The code repo provides a helper script to run the model on a single image:
```bash
# Run prediction on a single image:
depth-pro-run -i ./data/example.jpg
# Run `depth-pro-run -h` for available options.
```
### Running from Python
```python
from PIL import Image
import depth_pro
# Load model and preprocessing transform
model, transform = depth_pro.create_model_and_transforms()
model.eval()
# Load and preprocess an image.
image, _, f_px = depth_pro.load_rgb(image_path)
image = transform(image)
# Run inference.
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"] # Depth in [m].
focallength_px = prediction["focallength_px"] # Focal length in pixels.
```
### Evaluation (boundary metrics)
Boundary metrics are implemented in `eval/boundary_metrics.py` and can be used as follows:
```python
# for a depth-based dataset
boundary_f1 = SI_boundary_F1(predicted_depth, target_depth)
# for a mask-based dataset (image matting / segmentation)
boundary_recall = SI_boundary_Recall(predicted_depth, target_mask)
```
## Citation
If you find our work useful, please cite the following paper:
```bibtex
@article{Bochkovskii2024:arxiv,
author = {Aleksei Bochkovskii and Ama\"{e}l Delaunoy and Hugo Germain and Marcel Santos and
Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
title = {Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
journal = {arXiv},
year = {2024},
}
```
## Acknowledgements
Our codebase is built using multiple opensource contributions, please see [Acknowledgements](https://github.com/apple/ml-depth-pro/blob/main/ACKNOWLEDGEMENTS.md) for more details.
Please check the paper for a complete list of references and datasets used in this work.
|
apple/OpenELM-3B
|
apple
| 2025-02-28T18:31:38Z | 302 | 120 |
transformers
|
[
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-04-12T21:48:54Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-3B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. For example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-3B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-3B --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
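If you prefer not to use the helper script, a minimal sketch of loading the checkpoint directly with `transformers` follows; `trust_remote_code=True` and the LLaMA-2 tokenizer are both required, as noted in the evaluation setup below. The prompt and `repetition_penalty` mirror the CLI examples above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-3B", trust_remote_code=True, torch_dtype=torch.bfloat16
)
# OpenELM uses the LLaMA tokenizer (see the evaluation section below).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```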
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-3B
hf_model=apple/OpenELM-3B
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
apple/OpenELM-450M
|
apple
| 2025-02-28T18:31:35Z | 748 | 25 |
transformers
|
[
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-04-12T21:48:16Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. For example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'  # quoted so the shell does not treat '>' as redirection
```
### Evaluate OpenELM
```bash
# OpenELM-450M
hf_model=apple/OpenELM-450M
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir -p lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
apple/OpenELM-270M
|
apple
| 2025-02-28T18:31:34Z | 2,088 | 73 |
transformers
|
[
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-04-12T21:42:49Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. For example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
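If you prefer to call `transformers` directly instead of the helper script, a minimal sketch is shown below. This is an illustration only: it assumes `trust_remote_code=True` (OpenELM ships custom modeling code) and borrows the LLaMA tokenizer referenced in the evaluation setup; the prompt and generation settings are arbitrary.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM ships custom modeling code, so trust_remote_code must be enabled.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
# OpenELM reuses the LLaMA tokenizer (see the evaluation setup below); the repo is gated and needs an access token.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Once upon a time there was"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```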
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'  # quoted so the shell does not treat '>' as redirection
```
### Evaluate OpenELM
```bash
# OpenELM-270M
hf_model=apple/OpenELM-270M
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir -p lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
apple/OpenELM-450M-Instruct
|
apple
| 2025-02-28T18:31:23Z | 18,192 | 46 |
transformers
|
[
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-04-12T21:51:56Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. For example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
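The same generation can also be run directly with `transformers`; the sketch below is illustrative only, assuming `trust_remote_code=True` for the custom OpenELM modeling code and the LLaMA tokenizer used in the evaluation setup below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M-Instruct", trust_remote_code=True)
# The gated meta-llama/Llama-2-7b-hf tokenizer is used, as in the evaluation commands below.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```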
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'  # quoted so the shell does not treat '>' as redirection
```
### Evaluate OpenELM
```bash
# OpenELM-450M-Instruct
hf_model=apple/OpenELM-450M-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir -p lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
apple/AIM-7B
|
apple
| 2025-02-28T18:31:02Z | 220 | 24 |
ml-aim
|
[
"ml-aim",
"pytorch",
"image-classification",
"arxiv:2401.08541",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-01-19T09:11:55Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
library_name: ml-aim
pipeline_tag: image-classification
---
# AIM: Autoregressive Image Models
*Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar,
Joshua M Susskind, and Armand Joulin*
This software project accompanies the research paper, [Scalable Pre-training of Large Autoregressive Image Models](https://arxiv.org/abs/2401.08541).
We introduce **AIM**, a collection of vision models pre-trained with an autoregressive generative objective.
We show that autoregressive pre-training of image features exhibits scaling properties similar to those of its
textual counterpart (i.e., Large Language Models). Specifically, we highlight two findings:
1. the model capacity can be trivially scaled to billions of parameters, and
2. AIM effectively leverages large collections of uncurated image data.
## Installation
Please install PyTorch using the official [installation instructions](https://pytorch.org/get-started/locally/).
Afterward, install the package as:
```commandline
pip install git+https://[email protected]/apple/ml-aim.git
```
## Usage
Below we provide an example of loading the model via [HuggingFace Hub](https://huggingface.co/docs/hub/) as:
```python
from PIL import Image
from aim.torch.models import AIMForImageClassification
from aim.torch.data import val_transforms
img = Image.open(...)
model = AIMForImageClassification.from_pretrained("apple/aim-7B")
transform = val_transforms()
inp = transform(img).unsqueeze(0)
logits, features = model(inp)
```
### ImageNet-1k results (frozen trunk)
The table below contains the classification results on the ImageNet-1k validation set.
<table style="margin: auto">
<thead>
<tr>
<th rowspan="2">model</th>
<th colspan="2">top-1 IN-1k</th>
</tr>
<tr>
<th>last layer</th>
<th>best layer</th>
</tr>
</thead>
<tbody>
<tr>
<td>AIM-0.6B</td>
<td>78.5%</td>
<td>79.4%</td>
</tr>
<tr>
<td>AIM-1B</td>
<td>80.6%</td>
<td>82.3%</td>
</tr>
<tr>
<td>AIM-3B</td>
<td>82.2%</td>
<td>83.3%</td>
</tr>
<tr>
<td>AIM-7B</td>
<td>82.4%</td>
<td>84.0%</td>
</tr>
</tbody>
</table>
|
apple/AIM-3B
|
apple
| 2025-02-28T18:31:01Z | 23 | 3 |
ml-aim
|
[
"ml-aim",
"pytorch",
"image-classification",
"arxiv:2401.08541",
"license:apple-amlr",
"region:us"
] |
image-classification
| 2024-01-19T09:11:29Z |
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
library_name: ml-aim
pipeline_tag: image-classification
---
# AIM: Autoregressive Image Models
*Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar,
Joshua M Susskind, and Armand Joulin*
This software project accompanies the research paper, [Scalable Pre-training of Large Autoregressive Image Models](https://arxiv.org/abs/2401.08541).
We introduce **AIM**, a collection of vision models pre-trained with an autoregressive generative objective.
We show that autoregressive pre-training of image features exhibits scaling properties similar to those of its
textual counterpart (i.e., Large Language Models). Specifically, we highlight two findings:
1. the model capacity can be trivially scaled to billions of parameters, and
2. AIM effectively leverages large collections of uncurated image data.
## Installation
Please install PyTorch using the official [installation instructions](https://pytorch.org/get-started/locally/).
Afterward, install the package as:
```commandline
pip install git+https://[email protected]/apple/ml-aim.git
```
## Usage
Below we provide an example of loading the model via [HuggingFace Hub](https://huggingface.co/docs/hub/) as:
```python
from PIL import Image
from aim.torch.models import AIMForImageClassification
from aim.torch.data import val_transforms
img = Image.open(...)
model = AIMForImageClassification.from_pretrained("apple/aim-3B")
transform = val_transforms()
inp = transform(img).unsqueeze(0)
logits, features = model(inp)
```
### ImageNet-1k results (frozen trunk)
The table below contains the classification results on the ImageNet-1k validation set.
<table style="margin: auto">
<thead>
<tr>
<th rowspan="2">model</th>
<th colspan="2">top-1 IN-1k</th>
</tr>
<tr>
<th>last layer</th>
<th>best layer</th>
</tr>
</thead>
<tbody>
<tr>
<td>AIM-0.6B</td>
<td>78.5%</td>
<td>79.4%</td>
</tr>
<tr>
<td>AIM-1B</td>
<td>80.6%</td>
<td>82.3%</td>
</tr>
<tr>
<td>AIM-3B</td>
<td>82.2%</td>
<td>83.3%</td>
</tr>
<tr>
<td>AIM-7B</td>
<td>82.4%</td>
<td>84.0%</td>
</tr>
</tbody>
</table>
|
maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e3-r100000000-n10.0
|
maanasharma5
| 2025-02-28T18:28:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt2",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-medium",
"base_model:adapter:openai-community/gpt2-medium",
"region:us"
] | null | 2025-02-28T18:28:40Z |
---
base_model: gpt2-medium
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
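In the absence of an official snippet, the following is a minimal loading sketch, assuming (from the `peft` metadata and the `openai-community/gpt2-medium` base model listed above) that this repository holds a PEFT adapter for GPT-2 medium; the prompt is a placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openai-community/gpt2-medium"
adapter_id = "maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e3-r100000000-n10.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Example prompt for generation.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```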
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e3-r100000000-n5.0
|
maanasharma5
| 2025-02-28T18:27:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt2",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-medium",
"base_model:adapter:openai-community/gpt2-medium",
"region:us"
] | null | 2025-02-28T18:27:50Z |
---
base_model: gpt2-medium
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
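No usage code is provided; below is a minimal loading sketch, assuming (from the `peft` metadata) that `AutoPeftModelForCausalLM` can resolve the declared `gpt2-medium` base model automatically, with a placeholder prompt.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e3-r100000000-n5.0"
# AutoPeftModelForCausalLM loads the base model named in the adapter config, then applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-medium")

inputs = tokenizer("Example prompt for generation.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```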
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
priyanynaru/LLaMA3.2-Python-Codegen-Finetune
|
priyanynaru
| 2025-02-28T18:27:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:24:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
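As a starting point, here is a minimal generation sketch; it assumes the checkpoint works with the chat-style `text-generation` pipeline implied by the model tags, and the example request is arbitrary.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="priyanynaru/LLaMA3.2-Python-Codegen-Finetune",
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
result = generator(messages, max_new_tokens=128, return_full_text=False)
print(result[0]["generated_text"])
```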
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
M4rt0no/Gestionabilidad-v3_batch32
|
M4rt0no
| 2025-02-28T18:26:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/tulio-chilean-spanish-bert",
"base_model:finetune:dccuchile/tulio-chilean-spanish-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-28T18:26:41Z |
---
library_name: transformers
license: cc-by-4.0
base_model: dccuchile/tulio-chilean-spanish-bert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Gestionabilidad-v3_batch32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gestionabilidad-v3_batch32
This model is a fine-tuned version of [dccuchile/tulio-chilean-spanish-bert](https://huggingface.co/dccuchile/tulio-chilean-spanish-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9298
- Precision: 0.9300
- Recall: 0.9298
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
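For reference, a minimal inference sketch, assuming the checkpoint is used as a standard text-classification model; the example sentence is an arbitrary placeholder.
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub; label names come from the model config.
classifier = pipeline("text-classification", model="M4rt0no/Gestionabilidad-v3_batch32")
print(classifier("Texto de ejemplo para clasificar."))
```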
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.283 | 0.2289 | 500 | 0.2429 | 0.9044 | 0.9072 | 0.9044 | 0.9048 |
| 0.2275 | 0.4579 | 1000 | 0.2073 | 0.9185 | 0.9185 | 0.9185 | 0.9183 |
| 0.2066 | 0.6868 | 1500 | 0.1900 | 0.9187 | 0.9202 | 0.9187 | 0.9181 |
| 0.1949 | 0.9158 | 2000 | 0.2105 | 0.9194 | 0.9213 | 0.9194 | 0.9187 |
| 0.1657 | 1.1447 | 2500 | 0.1920 | 0.9263 | 0.9270 | 0.9263 | 0.9259 |
| 0.1502 | 1.3736 | 3000 | 0.2021 | 0.9280 | 0.9279 | 0.9280 | 0.9279 |
| 0.1412 | 1.6026 | 3500 | 0.1858 | 0.9298 | 0.9300 | 0.9298 | 0.9296 |
| 0.1477 | 1.8315 | 4000 | 0.1950 | 0.9300 | 0.9304 | 0.9300 | 0.9301 |
| 0.1296 | 2.0604 | 4500 | 0.2188 | 0.9303 | 0.9304 | 0.9303 | 0.9304 |
| 0.1004 | 2.2894 | 5000 | 0.2367 | 0.9304 | 0.9305 | 0.9304 | 0.9305 |
| 0.0958 | 2.5183 | 5500 | 0.2294 | 0.9305 | 0.9305 | 0.9305 | 0.9303 |
| 0.1003 | 2.7473 | 6000 | 0.2394 | 0.9293 | 0.9299 | 0.9293 | 0.9290 |
| 0.1029 | 2.9762 | 6500 | 0.2294 | 0.9321 | 0.9320 | 0.9321 | 0.9320 |
| 0.0696 | 3.2051 | 7000 | 0.2727 | 0.9324 | 0.9324 | 0.9324 | 0.9322 |
| 0.0619 | 3.4341 | 7500 | 0.2672 | 0.9287 | 0.9301 | 0.9287 | 0.9289 |
| 0.0627 | 3.6630 | 8000 | 0.2897 | 0.9326 | 0.9329 | 0.9326 | 0.9327 |
| 0.0639 | 3.8919 | 8500 | 0.2970 | 0.9322 | 0.9322 | 0.9322 | 0.9322 |
| 0.0549 | 4.1209 | 9000 | 0.3230 | 0.9321 | 0.9322 | 0.9321 | 0.9321 |
| 0.0409 | 4.3498 | 9500 | 0.3722 | 0.9313 | 0.9317 | 0.9313 | 0.9314 |
| 0.0388 | 4.5788 | 10000 | 0.3326 | 0.9333 | 0.9335 | 0.9333 | 0.9333 |
| 0.0373 | 4.8077 | 10500 | 0.3565 | 0.9332 | 0.9335 | 0.9332 | 0.9333 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
braginpawel/deepseek-14b-dpo-495ex-3ep-5th_iteration
|
braginpawel
| 2025-02-28T18:22:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T18:22:42Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** braginpawel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
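A minimal inference sketch (an assumption that the repository contains standalone weights loadable with `transformers`; adjust device and quantization settings to your hardware):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "braginpawel/deepseek-14b-dpo-495ex-3ep-5th_iteration"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```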
|
SanteriVtj/ppo-SnowballTarget
|
SanteriVtj
| 2025-02-28T18:22:37Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-02-28T18:22:34Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SanteriVtj/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Finetuning_T5_HealthCare_Chatbot-GGUF
|
mradermacher
| 2025-02-28T18:21:46Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T18:21:10Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ahmed792002/Finetuning_T5_HealthCare_Chatbot
|
TongZheng1999/Qwen2.5-7B-Instruct-star-code-3Rounds-iter-1
|
TongZheng1999
| 2025-02-28T18:21:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:09:19Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-star-code-3Rounds-iter-1
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-star-code-3Rounds-iter-1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/Qwen2.5-7B-Instruct-star-code-3Rounds-iter-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/byj1act3)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sophie-Rain-Spider-Man-New-Sex/Sophie.Rain.SpiderMan.Videos.VIRAL.Sophie.Rain.Spider.Man.Video.Tutorial
|
Sophie-Rain-Spider-Man-New-Sex
| 2025-02-28T18:20:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:20:01Z |
<p><a href="https://t.co/yLt2Ar1EVv">Click Here to Watch (Full Video)</a></p>
<p><a href="https://t.co/yLt2Ar1EVv">Click Here to Watch (Full Video Link)</a></p>
|
rse-mfm/whisper-small-hi-2
|
rse-mfm
| 2025-02-28T18:18:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-28T14:01:36Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 32.4938626936426
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4418
- Wer: 32.4939
## Model description
More information needed
## Intended uses & limitations
More information needed
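
The card does not yet document usage, so here is a minimal inference sketch, assuming the checkpoint works with the standard `transformers` ASR pipeline; the audio path is a placeholder, not a file shipped with this repo.

```python
# Minimal sketch: transcribe a Hindi audio clip with the fine-tuned checkpoint.
# "sample_hi.wav" is a placeholder path -- substitute your own audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rse-mfm/whisper-small-hi-2")
result = asr(
    "sample_hi.wav",
    generate_kwargs={"language": "hi", "task": "transcribe"},
)
print(result["text"])
```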
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0918 | 2.4450 | 1000 | 0.2989 | 35.1689 |
| 0.0197 | 4.8900 | 2000 | 0.3579 | 33.9203 |
| 0.0014 | 7.3350 | 3000 | 0.4170 | 32.6632 |
| 0.0005 | 9.7800 | 4000 | 0.4418 | 32.4939 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Sophie-Rain-SpiderMan-Viral-Leaked-Link/Sophie.Rain.Spider-Man.Video.Twitter
|
Sophie-Rain-SpiderMan-Viral-Leaked-Link
| 2025-02-28T18:18:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:18:16Z |
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Watch Full video)</a></p>
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Full video Link)</a></p>
|
Sophie-Rain-Spider-video/Sophie.Rain.Spiderman.Video.viral.leak
|
Sophie-Rain-Spider-video
| 2025-02-28T18:18:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:17:54Z |
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Watch Full video)</a></p>
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Full video Link)</a></p>
|
Elcaida/pretrainnnn
|
Elcaida
| 2025-02-28T18:17:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T18:17:11Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Elcaida
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
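
No usage example is included; the sketch below assumes the repository holds merged weights that load directly through the `transformers` text-generation pipeline (if only LoRA adapters were pushed, load them with PEFT instead).

```python
# Minimal sketch: chat-style generation with the uploaded checkpoint.
# Assumes merged full weights are present in the repo (not just LoRA adapters).
from transformers import pipeline

generator = pipeline("text-generation", model="Elcaida/pretrainnnn")
messages = [{"role": "user", "content": "Explain instruction tuning in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```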
|
Sophie-Rain-Leak-New-Videos/Sophie.Rain.Leaks.Video.Free
|
Sophie-Rain-Leak-New-Videos
| 2025-02-28T18:15:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:15:31Z |
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Watch Full video)</a></p>
<p><a href="https://t.co/yLt2Ar1EVv">🔴 ➤►Click Here to (Full video Link)</a></p>
|
YMEA/pathe_tts-ln-V0.1
|
YMEA
| 2025-02-28T18:15:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-02-28T15:57:14Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: pathe_tts-ln-V0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pathe_tts-ln-V0.1
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5275
## Model description
More information needed
## Intended uses & limitations
More information needed
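
No inference example is provided; below is a minimal sketch following the standard SpeechT5 text-to-speech recipe. The x-vector speaker-embedding dataset and the HiFi-GAN vocoder are assumptions borrowed from the upstream `microsoft/speecht5_tts` example, and the input text is a placeholder.

```python
# Minimal sketch: synthesize speech with the fine-tuned SpeechT5 checkpoint.
# The speaker-embedding dataset and vocoder below come from the upstream SpeechT5
# example and are not part of this repository; the input text is a placeholder.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("YMEA/pathe_tts-ln-V0.1")
model = SpeechT5ForTextToSpeech.from_pretrained("YMEA/pathe_tts-ln-V0.1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Mbote na yo!", return_tensors="pt")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```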
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.6645 | 17.0716 | 700 | 0.5002 |
| 0.499 | 34.1433 | 1400 | 0.4979 |
| 0.4651 | 51.2149 | 2100 | 0.4962 |
| 0.4486 | 68.2866 | 2800 | 0.5084 |
| 0.4364 | 85.3582 | 3500 | 0.5186 |
| 0.4305 | 102.4299 | 4200 | 0.5038 |
| 0.4244 | 119.5015 | 4900 | 0.5227 |
| 0.4208 | 136.5731 | 5600 | 0.5275 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
TareksLab/TEST-LLaMa-70B
|
TareksLab
| 2025-02-28T18:14:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TareksLab/UL3.3-FUSION-BASE-70B",
"base_model:merge:TareksLab/UL3.3-FUSION-BASE-70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T16:51:24Z |
---
base_model:
- TareksLab/UL3.3-FUSION-BASE-70B
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- Sao10K/L3.1-70B-Hanami-x1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [TareksLab/UL3.3-FUSION-BASE-70B](https://huggingface.co/TareksLab/UL3.3-FUSION-BASE-70B) as a base.
### Models Merged
The following models were included in the merge:
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.7
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
parameters:
weight: 0.20
density: 0.7
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
density: 0.7
- model: TheDrummer/Anubis-70B-v1
parameters:
weight: 0.20
density: 0.7
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
density: 0.7
merge_method: della_linear
base_model: TareksLab/UL3.3-FUSION-BASE-70B
parameters:
epsilon: 0.2
lambda: 1.1
dtype: bfloat16
tokenizer:
source: TareksLab/UL3.3-FUSION-BASE-70B
```
|
mradermacher/phi-2-mental_health-GGUF
|
mradermacher
| 2025-02-28T18:13:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T18:13:40Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kasper52786/phi-2-mental_health
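
No usage instructions accompany the quants; the sketch below uses `llama-cpp-python`, and the quant filename is a guess at the usual naming convention, so check it against the repository's file list before downloading.

```python
# Minimal sketch: download one GGUF quant and run it with llama-cpp-python.
# The filename (Q4_K_M quant) is an assumed naming pattern -- verify it against
# the actual files listed in the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/phi-2-mental_health-GGUF",
    filename="phi-2-mental_health.Q4_K_M.gguf",  # assumed filename
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("I have been feeling anxious lately. What can I do?", max_tokens=128)
print(out["choices"][0]["text"])
```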
|
nikhatbegum/english-telugu-colloquial-translator
|
nikhatbegum
| 2025-02-28T18:12:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text-generation",
"generated_from_trainer",
"base_model:harshitha2406/English_to_Telugu",
"base_model:finetune:harshitha2406/English_to_Telugu",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T20:16:10Z |
---
library_name: transformers
license: cc0-1.0
base_model: harshitha2406/English_to_Telugu
tags:
- generated_from_trainer
model-index:
- name: english-telugu-colloquial-translator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-telugu-colloquial-translator
This model is a fine-tuned version of [harshitha2406/English_to_Telugu](https://huggingface.co/harshitha2406/English_to_Telugu) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1762
## Model description
More information needed
## Intended uses & limitations
More information needed
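
Usage is not documented; the sketch below assumes the checkpoint loads with the standard seq2seq classes (the repo is tagged as an mBART model). Language codes or a forced BOS token may need to be set explicitly, so treat this as a starting point rather than a reference implementation.

```python
# Minimal sketch: translate an English sentence with the fine-tuned checkpoint.
# Assumes standard mBART-style seq2seq loading; language codes / forced BOS tokens
# may need to be configured explicitly for usable output.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "nikhatbegum/english-telugu-colloquial-translator"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("How are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```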
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.0948 | 2.0 | 2 | 13.0171 |
| 13.0165 | 4.0 | 4 | 13.0171 |
| 12.1123 | 6.0 | 6 | 11.3898 |
| 10.3502 | 8.0 | 8 | 9.4103 |
| 8.7401 | 10.0 | 10 | 8.1762 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|