modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
rmezapi/dementia-vit | rmezapi | 2025-03-10T17:40:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"dementia",
"en",
"dataset:Falah/Alzheimer_MRI",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-03-10T17:30:55Z | ---
datasets:
- Falah/Alzheimer_MRI
base_model:
- google/vit-base-patch16-224-in21k
pipeline_tag: image-classification
tags:
- dementia
license: mit
language:
- en
library_name: transformers
---
This project was intended to test the limits of the ViT on a tough dementia dataset. The data used can be found on Hugging Face at https://huggingface.co/datasets/Falah/Alzheimer_MRI. The project closely follows these tutorials:
https://www.youtube.com/watch?v=r88L_yLJ4CE&ab_channel=code_your_own_AI
https://www.youtube.com/watch?v=qU7wO02urYU&ab_channel=JamesBriggs
I modified the code presented in the videos and tuned all parameters to optimize performance, using mostly the same libraries and tools. This is a practice project for myself as I return to coding/designing ML models after dedicating time to AI/ML theory (model architectures, transfer learning).
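A minimal inference sketch, assuming the checkpoint works with the standard `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline("image-classification", model="rmezapi/dementia-vit")

image = Image.open("mri_scan.png")  # placeholder path to a brain MRI slice
print(classifier(image))  # list of {label, score} predictions
```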



 |
TensorStack/AirtistPhoto-amuse | TensorStack | 2025-03-10T17:40:32Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-03-10T17:37:29Z | # AIrtist Photo MAL Realistic - Onnx DirectML Optimized
## Original Model
https://civitai.com/models/229332/airtist-photo-mal-realistic
## Amuse
https://www.amuse-ai.com/ |
TensorStack/AbsoluteReality_v181-amuse | TensorStack | 2025-03-10T17:36:52Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-03-10T17:31:51Z | # AbsoluteReality v1.8.1 - Onnx DirectML Optimized
## Original Model
https://civitai.com/models/81458/absolutereality?modelVersionId=132760
## Amuse
https://www.amuse-ai.com/ |
Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5 | Lettria | 2025-03-10T17:36:30Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4861",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-10T17:35:22Z | ---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4861
- loss:OnlineContrastiveLoss
widget:
- source_sentence: 'Type de project: Actions de valorisation (expos physiques ou virtuelles,
journées d’étude, site internet, publications, documentaires…),Outils de médiation (cartes
et itinéraires papier ou numériques, livrets de visite, outils numériques, multimédia,
parcours d’interprétation…),Dispositifs pédagogiques (mallettes pédagogiques,
Moocs, supports de visite à destination des jeunes…),Événements rayonnant à l’échelle
de l’Île-de-France. Une attention particulière sera portée à la qualité des contenus,
à l’originalité et la pertinence des outils ou actions proposés, et à leur adéquation
avec les publics ciblés.'
sentences:
- '''Actions de valorisation'':projet|ÉVALUÉ_PAR|''adéquation avec les publics ciblés'':critère'
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''Association - Fondation'':entité'
- '''projets de coopération'':projet|IMPLIQUE|''agriculteur cédant'':personne'
- source_sentence: 'Description: Cet appel à projets vise à soutenir les structures
en investissement qui agissent en faveur des jeunes en situation de précarité,
suite à une rupture familiale ou sociale pouvant entraîner de graves conséquences
sur leur santé ou leur sécurité.
Thèmes: Santé & Social : Solidarité
Nature de l''aide: Les dépenses éligibles se composent de dépenses de fonctionnement
exclusivement imputables à la mise en œuvre des projets retenus dans le cadre
de ce dispositif. La subvention régionale est fixée à 50 % maximum de la dépense
subventionnable (total des dépenses éligibles), dans la limite d’un plafond de
subvention fixé à 75 000 € maximum.
Délibération cadre: CR 100-16 du 22 septembre 2016 / CP 2018-428 du 17 octobre
2018'
sentences:
- '''C''POSSIBLE'':programme|FAVORISE_INSERTION_PROFESSIONNELLE|''lycéens'':groupe'
- '''Date de début'':concept|EST|''non précisée'':__inferred__'
- '''subvention régionale'':aide|LIMITE|''appel à projets'':projet'
- source_sentence: 'Type de project: Le programme propose des rencontres le samedi
après-midi dans une université ou une grande école réputée, entre les professionnels
bénévoles et les lycéens et collégiens sous la forme d''atelier thématiques. Ces
moments de rencontre touchent à une grande multitude de domaines d’activités. L''objectif
est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants
professionnels aux parcours atypiques et inspirants. Les intervenants suscitent
les ambitions et élargissent les perspectives des élèves.'
sentences:
- '''concours'':événement|CIBLE|''jeunes'':groupe'
- '''projets'':__inferred__|TÉLÉCHARGER_ET_REMPLIR|''charte des valeurs de la République
et de la laïcité'':document'
- '''programme'':initiative|IMPLIQUE|''lycéens'':groupe'
- source_sentence: 'Type de project: Le Prix des Innovateurs vise à encourager, soutenir
et valoriser la recherche, le transfert de technologie et l’émergence d’innovations
en santé dont l’impact sociétal et de santé publique est remarquable. Ce prix
a ainsi vocation à : Contribuer à la reconnaissance d’un chercheur et de son
équipe menant des recherches dans le secteur de la santé,Encourager la création
de spin-off de laboratoires académiques en garantissant les meilleures conditions
d’essaimage notamment par l’acquisition des compétences requises par l’ensemble
des membres de l’équipe,Renforcer'
sentences:
- '''2nde session de dépôt'':session|diffusion prévue|''diffusion à partir de novembre
2025'':__inferred__'
- '''chercheur'':personne|DIRIGE|''équipe de recherche'':groupe'
- '''Collectivité ou institution - Communes de > 20 000 hab'':organisation|éligible
pour|''dépôt des demandes de subvention'':procédure'
- source_sentence: 'Date de début: non précisée
Date de fin (clôture): non précisée
Date de début de la future campagne: non précisée'
sentences:
- '''Date de fin'':concept|EST|''Lundi 18 Novembre 2024'':__inferred__'
- '''Région IDF'':organisation|PROPOSE|''Grands Lieux d''Innovation'':programme'
- '''Date de fin'':concept|EST|''non précisée'':__inferred__'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: BinaryClassifEval
type: BinaryClassifEval
metrics:
- type: cosine_accuracy
value: 0.723911257189811
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7624340653419495
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.7310344827586207
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7544324398040771
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.6923076923076923
name: Cosine Precision
- type: cosine_recall
value: 0.7743506493506493
name: Cosine Recall
- type: cosine_ap
value: 0.7896511495283742
name: Cosine Ap
- type: cosine_mcc
value: 0.42531138116583356
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5")
# Run inference
sentences = [
    'Date de début: non précisée\nDate de fin (clôture): non précisée\nDate de début de la future campagne: non précisée',
    "'Date de fin':concept|EST|'non précisée':__inferred__",
    "'Date de fin':concept|EST|'Lundi 18 Novembre 2024':__inferred__",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `BinaryClassifEval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.7239 |
| cosine_accuracy_threshold | 0.7624 |
| cosine_f1 | 0.731 |
| cosine_f1_threshold | 0.7544 |
| cosine_precision | 0.6923 |
| cosine_recall | 0.7744 |
| **cosine_ap** | **0.7897** |
| cosine_mcc | 0.4253 |
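The same evaluation can be reproduced with the library's `BinaryClassificationEvaluator`; a sketch with placeholder pairs and labels (not the actual evaluation split):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5")

# Placeholder data: (sentence1, sentence2) pairs with 0/1 match labels
sentences1 = ["Date de début: non précisée"]
sentences2 = ["'Date de fin':concept|EST|'non précisée':__inferred__"]
labels = [1]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="BinaryClassifEval")
print(evaluator(model))  # dict of cosine accuracy/F1/AP/MCC metrics
```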
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 4,861 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 26 tokens</li><li>mean: 191.64 tokens</li><li>max: 429 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.2 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>'excès de précipitations':phénomène|DIMINUE|'rendements des protéagineux':concept</code> | <code>1</code> |
| <code>Type de project: Dans le cadre de sa stratégie « Impact 2028 », la Région s’engage dans la défense de la souveraineté industrielle en renforçant son soutien à une industrie circulaire et décarbonée, porteuse d’innovations et créatrice d’emplois. PM'up Jeunes pousses industrielles soutient les projets d’implantation d’une première usine tournée vers la décarbonation, l’efficacité énergétique et la circularité des processus de production. Ces projets peuvent prendre l'une de ces formes : Une première unité de production industrielle, après une phase de prototypage,Une ligne pilote de production industrielle, en interne ou chez un tiers situé en Île-de-France, à condition que sa production soit destinée à de premières commercialisations,La transformation d’une unité de production pilote à une unité de production industrielle</code> | <code>'Région Île-de-France':organisation|soutient|'industrie décarbonée':concept</code> | <code>1</code> |
| <code>Procédures et démarches: Le dépôt des demandes de subvention se fait en ligne sur la plateforme régionale mesdemarches.iledefrance.fr : Session de dépôt unique pour les nouvelles demandes : du 30 septembre au 4 novembre 2024 (11 heures) pour des festivals qui se déroulent entre le 1er mars 2025 et le 28 février 2026 (vote à la CP de mars 2025). Pour les demandes de renouvellement, un mail est envoyé aux structures concernées par le service du Spectacle vivant en amont de chaque session de dépôt.<br>Bénéficiaires: Professionnel - Culture, Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPC...</code> | <code>'Collectivité ou institution - EPCI':bénéficiaire|PEUT_BÉNÉFICIER|'demandes de subvention':procédure</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### json
* Dataset: json
* Size: 1,217 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 24 tokens</li><li>mean: 188.47 tokens</li><li>max: 394 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.22 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>0: ~38.40%</li><li>1: ~61.60%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------|
| <code>Type de project: Le programme propose des rencontres le samedi après-midi dans une université ou une grande école réputée, entre les professionnels bénévoles et les lycéens et collégiens sous la forme d'atelier thématiques. Ces moments de rencontre touchent à une grande multitude de domaines d’activités. L'objectif est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants professionnels aux parcours atypiques et inspirants. Les intervenants suscitent les ambitions et élargissent les perspectives des élèves.</code> | <code>'rencontres':événement|impliquent|'professionnels bénévoles':groupe</code> | <code>1</code> |
| <code>Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).</code> | <code>'Aménageurs privés':entité|INTERVIENT_POUR|'Départements':entité</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>'Date de fin':concept|EST|'non précisée':__inferred__</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 4.8482667652196246e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_steps`: 191
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `hub_model_id`: Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5
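A sketch of how these values plug into `SentenceTransformerTrainingArguments` (which mirrors Hugging Face `TrainingArguments`); the `output_dir` and `save_strategy` are assumptions, since `load_best_model_at_end` requires the save and eval strategies to match:
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="grag-go-idf-online_contrastive",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so load_best_model_at_end works
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=4.8482667652196246e-05,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_steps=191,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    hub_model_id="Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5",
)
```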
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4.8482667652196246e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 191
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: Lettria/grag-go-idf-online_contrastive_8082-v2-trial-5
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | BinaryClassifEval_cosine_ap |
|:-------:|:-------:|:-------------:|:---------------:|:---------------------------:|
| 0.1316 | 40 | 0.4716 | - | - |
| 0.2632 | 80 | 0.3705 | - | - |
| 0.3947 | 120 | 0.406 | - | - |
| 0.5263 | 160 | 0.3677 | - | - |
| 0.6579 | 200 | 0.39 | - | - |
| 0.7895 | 240 | 0.3813 | - | - |
| 0.9211 | 280 | 0.3815 | - | - |
| **1.0** | **304** | **-** | **0.114** | **0.7897** |
| 1.0526 | 320 | 0.3434 | - | - |
| 1.1842 | 360 | 0.3049 | - | - |
| 1.3158 | 400 | 0.3214 | - | - |
| 1.4474 | 440 | 0.3269 | - | - |
| 1.5789 | 480 | 0.2828 | - | - |
| 1.7105 | 520 | 0.2726 | - | - |
| 1.8421 | 560 | 0.3099 | - | - |
| 1.9737 | 600 | 0.2944 | - | - |
| 2.0 | 608 | - | 0.1362 | 0.7456 |
| 2.1053 | 640 | 0.2928 | - | - |
| 2.2368 | 680 | 0.2382 | - | - |
| 2.3684 | 720 | 0.2369 | - | - |
| 2.5 | 760 | 0.2086 | - | - |
| 2.6316 | 800 | 0.2401 | - | - |
| 2.7632 | 840 | 0.218 | - | - |
| 2.8947 | 880 | 0.1988 | - | - |
| 3.0 | 912 | - | 0.1510 | 0.7015 |
| 3.0263 | 920 | 0.199 | - | - |
| 3.1579 | 960 | 0.194 | - | - |
| 3.2895 | 1000 | 0.1726 | - | - |
| 3.4211 | 1040 | 0.1504 | - | - |
| 3.5526 | 1080 | 0.1782 | - | - |
| 3.6842 | 1120 | 0.1869 | - | - |
| 3.8158 | 1160 | 0.1624 | - | - |
| 3.9474 | 1200 | 0.149 | - | - |
| 4.0 | 1216 | - | 0.1467 | 0.7468 |
| 4.0789 | 1240 | 0.1431 | - | - |
| 4.2105 | 1280 | 0.1492 | - | - |
| 4.3421 | 1320 | 0.1345 | - | - |
| 4.4737 | 1360 | 0.1251 | - | - |
| 4.6053 | 1400 | 0.1032 | - | - |
| 4.7368 | 1440 | 0.0979 | - | - |
| 4.8684 | 1480 | 0.1369 | - | - |
| 5.0 | 1520 | 0.1013 | 0.1706 | 0.6860 |
| 5.1316 | 1560 | 0.1015 | - | - |
| 5.2632 | 1600 | 0.0871 | - | - |
| 5.3947 | 1640 | 0.0717 | - | - |
| 5.5263 | 1680 | 0.0912 | - | - |
| 5.6579 | 1720 | 0.0786 | - | - |
| 5.7895 | 1760 | 0.0891 | - | - |
| 5.9211 | 1800 | 0.0866 | - | - |
| 6.0 | 1824 | - | 0.1822 | 0.6957 |
| 6.0526 | 1840 | 0.0692 | - | - |
| 6.1842 | 1880 | 0.0543 | - | - |
| 6.3158 | 1920 | 0.0528 | - | - |
| 6.4474 | 1960 | 0.0644 | - | - |
| 6.5789 | 2000 | 0.084 | - | - |
| 6.7105 | 2040 | 0.0511 | - | - |
| 6.8421 | 2080 | 0.0544 | - | - |
| 6.9737 | 2120 | 0.0675 | - | - |
| 7.0 | 2128 | - | 0.1909 | 0.6784 |
| 7.1053 | 2160 | 0.0351 | - | - |
| 7.2368 | 2200 | 0.0492 | - | - |
| 7.3684 | 2240 | 0.04 | - | - |
| 7.5 | 2280 | 0.0606 | - | - |
| 7.6316 | 2320 | 0.0509 | - | - |
| 7.7632 | 2360 | 0.0397 | - | - |
| 7.8947 | 2400 | 0.0412 | - | - |
| 8.0 | 2432 | - | 0.1983 | 0.6886 |
| 8.0263 | 2440 | 0.0541 | - | - |
| 8.1579 | 2480 | 0.0302 | - | - |
| 8.2895 | 2520 | 0.0494 | - | - |
| 8.4211 | 2560 | 0.0286 | - | - |
| 8.5526 | 2600 | 0.0327 | - | - |
| 8.6842 | 2640 | 0.0378 | - | - |
| 8.8158 | 2680 | 0.037 | - | - |
| 8.9474 | 2720 | 0.0473 | - | - |
| 9.0 | 2736 | - | 0.2056 | 0.6887 |
| 9.0789 | 2760 | 0.0342 | - | - |
| 9.2105 | 2800 | 0.0251 | - | - |
| 9.3421 | 2840 | 0.0294 | - | - |
| 9.4737 | 2880 | 0.0346 | - | - |
| 9.6053 | 2920 | 0.0313 | - | - |
| 9.7368 | 2960 | 0.0288 | - | - |
| 9.8684 | 3000 | 0.039 | - | - |
| 10.0 | 3040 | 0.0426 | 0.1140 | 0.7897 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.3.0
- Accelerate: 1.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
alex17cmbs/dqn-SpaceInvadersNoFrameskip-v4 | alex17cmbs | 2025-03-10T17:36:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-10T17:35:51Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 781.50 +/- 214.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alex17cmbs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alex17cmbs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alex17cmbs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 50000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
maisarp/llama-3.1-8B-texto-para-sql | maisarp | 2025-03-10T17:35:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T17:34:14Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maisarp
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
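Since the repo ships GGUF weights, one way to run it locally is llama-cpp-python; a minimal sketch, with the GGUF filename assumed (check the repo's file list):
```python
from llama_cpp import Llama

# The filename below is a guess -- replace it with the actual .gguf file in the repo
llm = Llama.from_pretrained(
    repo_id="maisarp/llama-3.1-8B-texto-para-sql",
    filename="llama-3.1-8B-texto-para-sql.gguf",
)
print(llm("-- Translate to SQL: list all users\n", max_tokens=64))
```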
|
teo2003zz/Meta-Llama-3.1-8B-Instruct-Second-Brain-Summariztion | teo2003zz | 2025-03-10T17:34:32Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T17:23:06Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** teo2003zz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
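The tags mark this as a conversational text-generation checkpoint; a minimal chat sketch with the standard pipeline (untested, and an 8B model needs a sizeable GPU):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="teo2003zz/Meta-Llama-3.1-8B-Instruct-Second-Brain-Summariztion",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize this note: LLMs compress knowledge into weights."}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```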
|
mlx-community/miscii-14b-0218-4bit | mlx-community | 2025-03-10T17:32:19Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"zh",
"base_model:sthenno-com/miscii-14b-0218",
"base_model:quantized:sthenno-com/miscii-14b-0218",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-03-10T17:31:47Z | ---
language:
- en
- zh
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- mlx
- mlx-my-repo
base_model: sthenno-com/miscii-14b-0218
metrics:
- accuracy
model-index:
- name: miscii-14b-0218
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 76.56
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 50.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 51.44
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.79
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.21
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.75
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno-com/miscii-14b-0218
name: Open LLM Leaderboard
---
# sthenno/miscii-14b-0218-4bit
The Model [sthenno/miscii-14b-0218-4bit](https://huggingface.co/sthenno/miscii-14b-0218-4bit) was converted to MLX format from [sthenno-com/miscii-14b-0218](https://huggingface.co/sthenno-com/miscii-14b-0218) using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("sthenno/miscii-14b-0218-4bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
HumanoidTeam/binary_cube_rainbow_local_processing_joint_fixes_40k | HumanoidTeam | 2025-03-10T17:30:47Z | 0 | 0 | null | [
"safetensors",
"dataset:HumanoidTeam/rby_binary_cube_v5",
"region:us"
] | null | 2025-03-10T16:57:10Z | ---
datasets:
- HumanoidTeam/rby_binary_cube_v5
--- |
shibajustfor/a9cf8dfd-5a10-44ec-8d79-d8f489f57cca | shibajustfor | 2025-03-10T17:30:36Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"region:us"
] | null | 2025-03-10T17:30:31Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: echarlaix/tiny-random-mistral
model-index:
- name: shibajustfor/a9cf8dfd-5a10-44ec-8d79-d8f489f57cca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/a9cf8dfd-5a10-44ec-8d79-d8f489f57cca
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
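Since the card is auto-generated, here is a minimal sketch of loading the adapter with PEFT, assuming the base model listed in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("echarlaix/tiny-random-mistral")
# Attach this repo's adapter weights on top of the base model
model = PeftModel.from_pretrained(base, "shibajustfor/a9cf8dfd-5a10-44ec-8d79-d8f489f57cca")
```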
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiulawaldev/06c4b1f4-b31d-404f-9265-a839e13373d4 | robiulawaldev | 2025-03-10T17:29:55Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"region:us"
] | null | 2025-03-10T17:29:42Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: tiiuae/falcon-rw-1b
model-index:
- name: robiulawaldev/06c4b1f4-b31d-404f-9265-a839e13373d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/06c4b1f4-b31d-404f-9265-a839e13373d4
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/First_DeepSeek-R1-Medical-COT-GGUF | mradermacher | 2025-03-10T17:29:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:rk2903/First_DeepSeek-R1-Medical-COT",
"base_model:quantized:rk2903/First_DeepSeek-R1-Medical-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T15:35:30Z | ---
base_model: rk2903/First_DeepSeek-R1-Medical-COT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rk2903/First_DeepSeek-R1-Medical-COT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/First_DeepSeek-R1-Medical-COT-GGUF/resolve/main/First_DeepSeek-R1-Medical-COT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
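For example, the recommended Q4_K_M quant from the table above can be run directly with llama.cpp (a sketch; sampling flags are up to you):
```bash
llama-cli --hf-repo mradermacher/First_DeepSeek-R1-Medical-COT-GGUF \
  --hf-file First_DeepSeek-R1-Medical-COT.Q4_K_M.gguf \
  -p "A 45-year-old patient presents with chest pain." -n 256
```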
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/PerpetualNight-12B-GGUF | mradermacher | 2025-03-10T17:29:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"chatml",
"slerp",
"en",
"ja",
"base_model:yamatazen/PerpetualNight-12B",
"base_model:quantized:yamatazen/PerpetualNight-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:02:02Z | ---
base_model: yamatazen/PerpetualNight-12B
language:
- en
- ja
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- chatml
- slerp
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/yamatazen/PerpetualNight-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PerpetualNight-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PerpetualNight-12B-GGUF/resolve/main/PerpetualNight-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
baby-dev/85c544a8-5db1-4394-a425-5a5fa9c64d67 | baby-dev | 2025-03-10T17:28:51Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"region:us"
] | null | 2025-03-10T17:28:47Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: echarlaix/tiny-random-mistral
model-index:
- name: baby-dev/85c544a8-5db1-4394-a425-5a5fa9c64d67
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/85c544a8-5db1-4394-a425-5a5fa9c64d67
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cutycat2000x/void-1-32b-Q4_K_M-GGUF | cutycat2000x | 2025-03-10T17:25:26Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"base_model:voidai-team/void-1-32b",
"base_model:quantized:voidai-team/void-1-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-10T17:23:53Z | ---
base_model: voidai-team/void-1-32b
language:
- en
- de
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# cutycat2000x/void-1-32b-Q4_K_M-GGUF
This model was converted to GGUF format from [`voidai-team/void-1-32b`](https://huggingface.co/voidai-team/void-1-32b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/voidai-team/void-1-32b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cutycat2000x/void-1-32b-Q4_K_M-GGUF --hf-file void-1-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cutycat2000x/void-1-32b-Q4_K_M-GGUF --hf-file void-1-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cutycat2000x/void-1-32b-Q4_K_M-GGUF --hf-file void-1-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cutycat2000x/void-1-32b-Q4_K_M-GGUF --hf-file void-1-32b-q4_k_m.gguf -c 2048
```
|
gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased | gokulan006 | 2025-03-10T17:25:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased",
"base_model:finetune:gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-10T10:50:49Z | ---
library_name: transformers
license: apache-2.0
base_model: gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Suicidal-Risk-Analysis-DistilBert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Suicidal-Risk-Analysis-DistilBert-base-uncased
This model is a fine-tuned version of [gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased](https://huggingface.co/gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5168
- Accuracy: 0.8384
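A minimal inference sketch with the standard `text-classification` pipeline (the input is illustrative only; this model handles sensitive content and is not a diagnostic tool):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokulan006/Suicidal-Risk-Analysis-distilbert-base-uncased",
)
print(clf("I have been feeling very low lately."))  # [{'label': ..., 'score': ...}]
```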
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4205 | 1.0 | 659 | 0.3784 | 0.8346 |
| 0.2791 | 2.0 | 1318 | 0.3904 | 0.8456 |
| 0.1506 | 3.0 | 1977 | 0.5168 | 0.8384 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Aldey/cahya-distilbert-base-indonesian-smsa | Aldey | 2025-03-10T17:25:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:cahya/distilbert-base-indonesian",
"base_model:finetune:cahya/distilbert-base-indonesian",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-10T12:58:11Z | ---
library_name: transformers
license: mit
base_model: cahya/distilbert-base-indonesian
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cahya-distilbert-base-indonesian-smsa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cahya-distilbert-base-indonesian-smsa
This model is a fine-tuned version of [cahya/distilbert-base-indonesian](https://huggingface.co/cahya/distilbert-base-indonesian) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2808
- Accuracy: 0.9514
- F1 Score: 0.9517
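A sketch of running the classifier directly with the model and tokenizer classes (the Indonesian example sentence is a placeholder):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Aldey/cahya-distilbert-base-indonesian-smsa"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Makanannya enak sekali!", return_tensors="pt")  # "The food is delicious!"
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```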
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.3161 | 0.5359 | 500 | 0.2078 | 0.9470 | 0.9468 |
| 0.1997 | 1.0718 | 1000 | 0.2636 | 0.9382 | 0.9384 |
| 0.1303 | 1.6077 | 1500 | 0.1916 | 0.9558 | 0.9560 |
| 0.1184 | 2.1436 | 2000 | 0.2312 | 0.9523 | 0.9526 |
| 0.0609 | 2.6795 | 2500 | 0.2396 | 0.9532 | 0.9534 |
| 0.055 | 3.2154 | 3000 | 0.2428 | 0.9488 | 0.9489 |
| 0.0178 | 3.7513 | 3500 | 0.2742 | 0.9549 | 0.9551 |
| 0.0198 | 4.2872 | 4000 | 0.2679 | 0.9549 | 0.9551 |
| 0.0129 | 4.8232 | 4500 | 0.2762 | 0.9514 | 0.9516 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
CarsenGafford2/ppo-LunarLander-v2 | CarsenGafford2 | 2025-03-10T17:23:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-10T17:23:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.95 +/- 16.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it against the repo's files
checkpoint = load_from_hub("CarsenGafford2/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
1kpa/1kpa | 1kpa | 2025-03-10T17:21:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-10T16:38:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# 1Kpa
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('1kpa/1kpa', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
omnimodel/nexus-o-v8-update | omnimodel | 2025-03-10T17:21:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"hithink_omni",
"feature-extraction",
"custom_code",
"en",
"zh",
"arxiv:2503.01879",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2025-03-10T17:13:48Z | ---
library_name: transformers
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
# Model Card for Nexus-O
<!-- Provide a quick summary of what the model is/does. -->
v8: This is the model checkpoint used in the report (v1).
## Model Details
### Model Description
NEXUS-O: AN OMNI-PERCEPTIVE AND -INTERACTIVE MODEL FOR LANGUAGE, AUDIO, AND VISION
### Model Sources
- **Repository:** [More Information Needed]
- **Paper:** https://huggingface.co/papers/2503.01879
- **Demo:** [More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```
Transformers: pip install git+https://github.com/SnowYJ/transformers-omni-v8.git
```
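A minimal loading sketch (an assumption based on the repo's `custom_code` tag — the entry points for feeding audio/vision inputs live in the checkpoint's custom code and are not shown here):

```python
from transformers import AutoModel

# custom_code checkpoint: trust_remote_code pulls in the hithink_omni architecture
model = AutoModel.from_pretrained("omnimodel/nexus-o-v8-update", trust_remote_code=True)
```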
## 🔥 Training Details
### 🔥 Training Data
```
https://huggingface.co/datasets/omnimodel/nexus-o-v8-audio-data
```
### 🔥 Training Procedure
```
Train: https://github.com/SnowYJ/train-omni-v8/tree/swift_v3
```
## 🌟 Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
### 🌟 Testing Data
<!-- This should link to a Dataset Card if possible. -->
### 🌟 Results
| Model | LLM-size | Video-MME | MMMU | MathV | Hal | AI2D | OCR | MMVet | MME |
|---------------------|-----------------|-----------|------|--------|------|------|------|------|------|
| **Vision-Language Models** | | | | | | | | | |
| MiniGPT-V2 | Qwen2-7B | 57.5 | 49.8 | 60.6 | 48.1 | 82.1 | 852.0| 60.0 |2268.7|
| Qwen2.5-VL | Qwen2.5-7B | 56.0 | 51.8 | 61.1 | 71.7 | 80.7 | 877.0| - |2291.1|
| **Omni-modal Models** | | | | | | | | | |
| VITA-1.5-Audio | Qwen2-7B | - | 52.1 | 66.2 | 44.9 | 79.3 | 732.0| 49.6 |2352.0|
| EMova-8B | LLaMA-3.1-8B | - | - | 61.1 | - | 82.8 | 824.0| 55.8 |2205.0|
| Baichuan-Omni-1.5 | - | 58.2 | 47.3 | 51.9 | 47.8 | - | - | 65.4 |2186.9|
| Megrez-3B-Omni | Megrez-3B | - | 51.8 | 62.0 | 50.1 | 82.0 | - | - |2315.0|
| **Proprietary** | | | | | | | | | |
| GPT-4V | - | 50.4 | 59.3 | 48.2 | 39.3 | 71.4 | 678.0| 49.0 |1790.3|
| GPT-4o mini | - | 54.8 | 60.0 | 52.4 | 46.1 | 77.8 | 785.0| 66.9 |2003.4|
| Gemini-1.5 Pro | 200B | 59.1 | 60.6 | 57.7 | 45.6 | 79.1 | 754.0| 64.0 |2110.6|
| GPT-4o | - | 61.6 | 62.8 | 56.5 | 51.7 | 77.4 | 663.0| 66.5 |2328.7|
| Claude3.5 Sonnet | 175B | 62.2 | 65.9 | 61.6 | 49.9 | 80.2 | 778.0| 66.0 |1920.0|
| **Ours** | | | | | | | | | |
| Nexus-O | Qwen2.5-VL-7B | 57.0 | 53.2 | 62.1 | 71.1 | 81.2 | 882.0| - |2315.5|
### ❤️ Citation
```BibTeX
@misc{liu2025nexusoomniperceptiveinteractivemodel,
title={Nexus-O: An Omni-Perceptive And -Interactive Model for Language, Audio, And Vision},
author={Che Liu and Yingji Zhang and Dong Zhang and Weijie Zhang and Chenggong Gong and Haohan Li and Yu Lu and Shilin Zhou and Yue Lu and Ziliang Gan and Ziao Wang and Junwei Liao and Haipang Wu and Ji Liu and André Freitas and Qifan Wang and Zenglin Xu and Rongjuncheng Zhang and Yong Dai},
year={2025},
eprint={2503.01879},
archivePrefix={arXiv},
primaryClass={cs.MM},
url={https://arxiv.org/abs/2503.01879},
}
``` |
unieai/aqua-u1-micro-2503-b1 | unieai | 2025-03-10T17:19:29Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T17:12:36Z | ---
license: apache-2.0
---
|
sazzadul/Shrutimala_Bangla_ASR | sazzadul | 2025-03-10T17:18:08Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"asr",
"bangla",
"bangla-asr",
"wav2vec-bert",
"wav2vec-bert-bangla",
"bn",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:openslr/openslr",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-02-17T17:15:45Z | ---
datasets:
- mozilla-foundation/common_voice_17_0
- openslr/openslr
language:
- bn
metrics:
- wer
- cer
base_model:
- facebook/w2v-bert-2.0
pipeline_tag: automatic-speech-recognition
library_name: transformers
tags:
- asr
- bangla
- bangla-asr
- wav2vec-bert
- wav2vec-bert-bangla
license: cc-by-sa-4.0
---
# Model Card for Shrutimala Bangla ASR
## Model Details
### Model Description
This model is a fine-tuned version of `facebook/w2v-bert-2.0` for automatic speech recognition (ASR) in Bangla. The model has been trained on a large Bangla dataset, primarily sourced from Mozilla Common Voice 17.0, Common Voice 20.0, and OpenSLR, and achieves a Word Error Rate (WER) of 11%.
- **Developed by:** Sazzadul Islam
- **Model type:** Wav2Vec-BERT-based Bangla ASR model
- **Language(s):** Bangla (bn)
- **License:** CC-BY-SA-4.0
- **Fine-tuned from:** `facebook/w2v-bert-2.0`
<!-- ### Model Sources
- **Repository:** [Add Link]
- **Paper [optional]:** [Add Link]
- **Demo:** https://huggingface.co/spaces/sazzadul/Shrutimala_Bangla_ASR
-->
## Uses
### Direct Use
This model can be used for automatic speech recognition (ASR) in Bangla, with applications in transcription, voice assistants, and accessibility tools.
### Downstream Use
It can be further fine-tuned for domain-specific ASR tasks, including medical or legal transcription in Bangla.
### Out-of-Scope Use
- Not suitable for real-time ASR on low-power devices without optimization.
- May not perform well in noisy environments or on highly accented regional dialects outside the training data.
## Bias, Risks, and Limitations
- The model may struggle with low-resource dialects and uncommon speech patterns.
- Biases may exist due to dataset imbalances in gender, age, or socio-economic backgrounds.
- Ethical considerations should be taken when using the model for surveillance or sensitive applications.
## How to Get Started with the Model
Use the following code snippet to load the model:
```python
from transformers import AutoProcessor, AutoModelForCTC
import torch
import librosa

# The checkpoint is a Wav2Vec2-BERT model, so the Auto classes resolve to the right architecture
processor = AutoProcessor.from_pretrained("sazzadul/Shrutimala_Bangla_ASR")
model = AutoModelForCTC.from_pretrained("sazzadul/Shrutimala_Bangla_ASR")

# Load an audio file and resample it to the 16 kHz rate the model expects
audio_input, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio_input, return_tensors="pt", sampling_rate=16000)

# Perform ASR
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
## Training Details
### Training Data
The model was trained on the Mozilla Common Voice 17.0, Common Voice 20.0 and OpenSLR dataset for Bangla.
### Training Procedure
#### Preprocessing
- Audio was resampled 16 kHz → 8 kHz → 16 kHz (see the sketch after this list).
- Transcripts were normalized to improve ASR performance.
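A sketch of that resampling step (assuming the 16 → 8 → 16 kHz round trip is meant to simulate telephone-bandwidth audio):

```python
import librosa

# Round-trip resampling: 16 kHz -> 8 kHz -> 16 kHz (assumed augmentation intent)
y, _ = librosa.load("clip.wav", sr=16000)
y_8k = librosa.resample(y, orig_sr=16000, target_sr=8000)
y_16k = librosa.resample(y_8k, orig_sr=8000, target_sr=16000)
```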
#### Training Hyperparameters
- **Batch Size:** 16
- **Learning Rate:** 1e-5
- **Training Steps:** 25000
- **Mixed Precision:** FP16
#### Training Time and Compute
- **Hardware Used:** RTX 4090
- **Training Time:** 37 Hours
- **Dataset Size:** 143k
## Evaluation
### Testing Data & Metrics
#### Metrics
- **WER:** 11.26%
- **CER:** 2.39%
#### Factors
The model was evaluated on:
- Standard Bangla speech
- Various speaker demographics
### Results
- Performs well on clear, standard Bangla speech.
- Struggles with strong regional accents and noisy environments.
## Technical Specifications
### Model Architecture
The model is based on `facebook/w2v-bert-2.0`, a hybrid Wav2Vec2-BERT model for ASR.
<!-- ### Compute Infrastructure
- **Hardware:** [GPU/TPU used]
- **Software:** [Transformers version, PyTorch/TensorFlow version]
-->
## Contact
For any issues or inquiries, please contact [email protected]. |
TFOCUS/deep-fuk_3 | TFOCUS | 2025-03-10T17:17:48Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-03-10T17:03:03Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/CardProjector-7B-v2-i1-GGUF | mradermacher | 2025-03-10T17:16:08Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-10T16:58:27Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/AlexBefest/CardProjector-7B-v2
|
mradermacher/r1-1.5b-longthought-v2-GGUF | mradermacher | 2025-03-10T17:14:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mkhalifa/r1-1.5b-longthought-v2",
"base_model:quantized:mkhalifa/r1-1.5b-longthought-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T17:00:44Z | ---
base_model: mkhalifa/r1-1.5b-longthought-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mkhalifa/r1-1.5b-longthought-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
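For example, one of the single-file quants from the table below can be run directly with a local llama.cpp build (a sketch; older builds name the binary `main` instead of `llama-cli`):

```sh
./llama-cli -m r1-1.5b-longthought-v2.Q4_K_M.gguf -p "Hello" -n 128
```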
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-v2-GGUF/resolve/main/r1-1.5b-longthought-v2.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TFOCUS/moneytrue-logic_27 | TFOCUS | 2025-03-10T17:12:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T17:07:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MikeZ3/ppo-SnowballTarget | MikeZ3 | 2025-03-10T17:11:46Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-03-10T17:11:43Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MikeZ3/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
texanrangee/fa812779-cf6c-4eff-a62c-2a3a410c7ebc | texanrangee | 2025-03-10T17:09:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T09:33:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mrrans23/Ghetto_Blaster | Mrrans23 | 2025-03-10T17:08:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T17:08:50Z | ---
license: apache-2.0
---
|
looppayments/llama-11b-merged | looppayments | 2025-03-10T17:08:15Z | 854 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-02-12T22:00:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ASTAROTH-3.2-1B-GGUF | mradermacher | 2025-03-10T17:05:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"1b",
"nsfw",
"uncensored",
"abliterated",
"rp",
"roleplay",
"es",
"en",
"base_model:Novaciano/ASTAROTH-3.2-1B",
"base_model:quantized:Novaciano/ASTAROTH-3.2-1B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:55:20Z | ---
base_model: Novaciano/ASTAROTH-3.2-1B
language:
- es
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- 1b
- nsfw
- uncensored
- abliterated
- rp
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Novaciano/ASTAROTH-3.2-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q6_K.gguf) | Q6_K | 1.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ASTAROTH-3.2-1B-GGUF/resolve/main/ASTAROTH-3.2-1B.f16.gguf) | f16 | 3.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TFOCUS/moneytrue-logic_26 | TFOCUS | 2025-03-10T17:02:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T16:59:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
logasja/auramask-vgg-sutro | logasja | 2025-03-10T17:00:31Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T17:00:05Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/22e222b34c53a124093bd69e9f7e7d1d)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
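A minimal inference sketch (assumptions: the checkpoint loads as a Keras 3 model via the `hf://` protocol, and inputs are 256×256 RGB images scaled to [-1, 1] to match the `tanh` output activation above):

```python
import keras
import numpy as np

# Load the filter model straight from the Hub (Keras 3 understands hf:// URIs)
model = keras.saving.load_model("hf://logasja/auramask-vgg-sutro")

# One dummy 256x256 RGB image in [-1, 1], shape (batch, height, width, channels)
x = np.random.uniform(-1.0, 1.0, size=(1, 256, 256, 3)).astype("float32")
y = model.predict(x)  # the adversarially filtered image, same shape as the input
```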
## Model Architecture Plot
 |
athitiya/personal | athitiya | 2025-03-10T16:59:32Z | 0 | 0 | null | [
"text-generation",
"en",
"ta",
"te",
"ur",
"fr",
"ml",
"ar",
"ru",
"cs",
"fa",
"dataset:open-thoughts/OpenThoughts-114k",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:finetune:deepseek-ai/DeepSeek-R1",
"license:openrail",
"region:us"
] | text-generation | 2025-03-10T16:53:36Z | ---
license: openrail
datasets:
- open-thoughts/OpenThoughts-114k
language:
- en
- ta
- te
- ur
- fr
- ml
- ar
- ru
- cs
- fa
metrics:
- character
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
--- |
mradermacher/IRLLM-GGUF | mradermacher | 2025-03-10T16:58:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Fangyunhua/IRLLM",
"base_model:quantized:Fangyunhua/IRLLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:43:36Z | ---
base_model: Fangyunhua/IRLLM
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Fangyunhua/IRLLM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.IQ4_XS.gguf) | IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q4_K_M.gguf) | Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q5_K_M.gguf) | Q5_K_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q6_K.gguf) | Q6_K | 1.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.Q8_0.gguf) | Q8_0 | 2.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IRLLM-GGUF/resolve/main/IRLLM.f16.gguf) | f16 | 3.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
logasja/auramask-vgg-poprocket | logasja | 2025-03-10T16:57:52Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:57:21Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/6e34709da3ee5b585227267c595cccb1)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
od2025/dark_kappa | od2025 | 2025-03-10T16:55:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-03-10T16:22:41Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
glif-loradex-trainer/quitters_BalatroStyle | glif-loradex-trainer | 2025-03-10T16:55:48Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2025-03-10T16:55:42Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1741625672323__000001000_0.jpg
text: wounded centaur, mythical creature balatro
- output:
url: samples/1741625697262__000001000_1.jpg
text: ruins of athens, snake balatro
- output:
url: samples/1741625722278__000001000_2.jpg
text: silver vampire sword balatro
base_model: black-forest-labs/FLUX.1-dev
trigger: "balatro"
instance_prompt: "balatro"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# BalatroStyle
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `quitters`.
<Gallery />
## Trigger words
You should use `balatro` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/quitters_BalatroStyle/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Adolfo-GM/AggmGPT-1.5 | Adolfo-GM | 2025-03-10T16:54:56Z | 0 | 1 | null | [
"en",
"license:mit",
"region:us"
] | null | 2025-03-08T19:02:03Z | ---
license: mit
language:
- en
---
# AggmGPT-1.5
AggmGPT-1.5 is a lightweight language model developed by Adolfo GM, based on AggmGPT-1, that generates human-like text using n-gram models combined with self-attention mechanisms. The project is licensed under the MIT License, making it open-source and free to modify and distribute. AggmGPT-1.5 is far more capable than its predecessor, producing more coherent, human-like text while remaining very small compared to other language models: at under 500 KB, it is well suited to embedded systems and other resource-constrained environments.
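To make the n-gram component concrete, here is a toy bigram generator — an illustration of the idea only, not AggmGPT-1.5's actual implementation:

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Toy bigram table: map each word to the words observed after it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=10):
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate(build_bigrams("the cat sat on the mat and the cat ran"), "the"))
```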
## Examples

AggmGPT-1.5 is great at answering simple questions.

The script has a built in grammar correction that most of the time works very well.

However, with this example we can clearly see that the model is not perfect and can sometimes generate text that is not coherent.
## Files
- `AggmGPT1_5.py`: The main script that generates text using the AggmGPT-1.5 model.
- `example.py`: An example of how to use AggmGPT-1.5 to generate text.
- `data.py`: The training data used to train the AggmGPT-1.5 model.
In conclusion, AggmGPT-1.5 is a powerful and lightweight language model that is capable of generating human-like text. The project is open-source and free for modification and distribution, making it a great choice for developers looking for a lightweight language model that is easy to use and customize. |
logasja/auramask-vgg-maven | logasja | 2025-03-10T16:52:58Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:52:26Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/e6394c21209cbdc8623c9e5364e51567)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
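The card does not include inference code; below is a minimal sketch, assuming the checkpoint loads through the Hub's Keras mixin and that inputs are 256x256 RGB images scaled to [-1, 1] to match the tanh output (both assumptions):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the checkpoint from the Hub (assumes the Keras mixin can read it).
model = from_pretrained_keras("logasja/auramask-vgg-maven")

# Dummy batch: one 256x256 RGB image scaled to [-1, 1] (assumed input range).
image = np.random.uniform(-1.0, 1.0, size=(1, 256, 256, 3)).astype("float32")
protected = model.predict(image)  # output has the same shape as the input
```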
## Model Architecture Plot
 |
mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF | mradermacher | 2025-03-10T16:52:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"ja",
"dataset:SousiOmine/Japanese-Pythonic-FunctionCall",
"dataset:Kendamarron/jimba-instruction-all",
"base_model:SousiOmine/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall",
"base_model:quantized:SousiOmine/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:31:07Z | ---
base_model: SousiOmine/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall
datasets:
- SousiOmine/Japanese-Pythonic-FunctionCall
- Kendamarron/jimba-instruction-all
language:
- ja
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SousiOmine/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
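As a concrete sketch, one way to run a quant from the table below with the `llama-cpp-python` bindings (an assumption — any GGUF-capable runtime works; the filename matches the Q4_K_M row):

```python
from llama_cpp import Llama

# Point at a downloaded quant; Q4_K_M is a "fast, recommended" row below.
llm = Llama(model_path="sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q4_K_M.gguf")
out = llm("こんにちは。自己紹介してください。", max_tokens=128)
print(out["choices"][0]["text"])
```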
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall-GGUF/resolve/main/sarashina2.2-3b-instruct-v0.1-Pythonic-FunctionCall.f16.gguf) | f16 | 6.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
k0n8/Stubb | k0n8 | 2025-03-10T16:51:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-10T16:24:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Stu55
---
# Stubb
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Stu55` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('k0n8/Stubb', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Raviravi99/E | Raviravi99 | 2025-03-10T16:51:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T16:51:22Z | ---
license: apache-2.0
---
|
PhongNgoGia/Qwen2.5-1.5B-GRPO_SFT | PhongNgoGia | 2025-03-10T16:50:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:passport_en_grpo",
"arxiv:2402.03300",
"base_model:PhongNgoGia/Qwen2.5-1.5B-Lora",
"base_model:finetune:PhongNgoGia/Qwen2.5-1.5B-Lora",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T07:17:54Z | ---
base_model: PhongNgoGia/Qwen2.5-1.5B-Lora
datasets: passport_en_grpo
library_name: transformers
model_name: Qwen2.5-1.5B-GRPO_SFT
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-GRPO_SFT
This model is a fine-tuned version of [PhongNgoGia/Qwen2.5-1.5B-Lora](https://huggingface.co/PhongNgoGia/Qwen2.5-1.5B-Lora) on the [passport_en_grpo](https://huggingface.co/datasets/passport_en_grpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PhongNgoGia/Qwen2.5-1.5B-GRPO_SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/k9/grpo/runs/qsuw59w5)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
logasja/auramask-vgg-lark | logasja | 2025-03-10T16:50:48Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:49:59Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/50c31e14a8fc5937c0c4f9d3b5488add)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
logasja/auramask-vgg-kelvin | logasja | 2025-03-10T16:49:20Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:48:48Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/db4f4582c30f387d0bd428355d6fb3db)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
logasja/auramask-vgg-juno | logasja | 2025-03-10T16:48:32Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:48:04Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/6a2e78f9a239c32742807b26259a5c42)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
logasja/auramask-vgg-helena | logasja | 2025-03-10T16:47:13Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:46:40Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/00f8f682c9dd7d8116e067e2c9d2dc07)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
sharanharsoor/ppo-LunarLander-v2 | sharanharsoor | 2025-03-10T16:46:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-10T16:46:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -480.83 +/- 68.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's Files tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="sharanharsoor/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
logasja/auramask-vgg-gingham | logasja | 2025-03-10T16:46:34Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:46:02Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/6185ef333f1a4da4770b2c82bd1cf9f8)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
ClarenceDan/cf49ed5c-c448-47fb-9709-eda552202831 | ClarenceDan | 2025-03-10T16:45:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T15:28:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf49ed5c-c448-47fb-9709-eda552202831
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8394bbed174a5ca4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8394bbed174a5ca4_train_data.json
type:
field_input: subarea
field_instruction: principle
field_output: goal
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/cf49ed5c-c448-47fb-9709-eda552202831
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8394bbed174a5ca4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ca99dcef-3003-458a-bed3-6ee4ba42b30d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ca99dcef-3003-458a-bed3-6ee4ba42b30d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cf49ed5c-c448-47fb-9709-eda552202831
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
logasja/auramask-vgg-ashby | logasja | 2025-03-10T16:44:59Z | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | 2025-03-10T16:44:32Z | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/96ab8e61346979b3d192883c176d090f)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot
 |
texanrangee/e11ab0e8-8e09-495f-9c48-3d3005989fe8 | texanrangee | 2025-03-10T16:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T16:25:30Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prasenjeet099/zllm2 | prasenjeet099 | 2025-03-10T16:41:29Z | 0 | 0 | null | [
"zbrain",
"text-classification",
"pytorch",
"tensorflow",
"zero-shot-classification",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:1911.02116",
"license:mit",
"region:us"
] | zero-shot-classification | 2025-03-10T03:51:02Z | ---
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- xnli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "За кого вы голосуете в 2020 году?"
candidate_labels: "politique étrangère, Europe, élections, affaires, politique"
multi_class: true
- text: "لمن تصوت في 2020؟"
candidate_labels: "السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة"
multi_class: true
- text: "2020'de kime oy vereceksiniz?"
candidate_labels: "dış politika, Avrupa, seçimler, ticaret, siyaset"
multi_class: true
---
# xlm-roberta-large-xnli
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu
Since the base model was pre-trained on 100 different languages, the model has shown some effectiveness in languages beyond those listed above as well. See the full list of pre-trained languages in appendix A of the [XLM-RoBERTa paper](https://arxiv.org/abs/1911.02116).
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="joeddav/xlm-roberta-large-xnli")
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the Russian translation of, "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
# 'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
# 'sequence': 'За кого вы голосуете в 2020 году?'}
```
The default hypothesis template is the English `This example is {}.` If you are working strictly within one language, it may be worthwhile to translate this to the language you are working with:
```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
# 'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
# 'sequence': '¿A quién vas a votar en 2020?'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')

sequence = "За кого вы голосуете в 2020 году?"  # the sequence to classify
label = "politics"                               # a candidate label
premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
## Training
This model was pre-trained on a set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the NLI task using the concatenated
MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on XNLI
data alone, with the premise and hypothesis translations shuffled so that the premise and hypothesis for
each example come from the same original English example but are in different languages.
|
ikenna1234/llama_3.2_1b_instruct_custom_reward_model | ikenna1234 | 2025-03-10T16:40:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T16:39:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fire0517/model_test1 | fire0517 | 2025-03-10T16:39:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-08T03:02:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: -1080.50 +/- 1050.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v3**
This is a trained model of a **DQN** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's Files tab for the actual name):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="fire0517/model_test1", filename="dqn-LunarLander-v3.zip")
model = DQN.load(checkpoint)
```
|
Alphatao/942c4135-e225-4072-a929-7998548563ef | Alphatao | 2025-03-10T16:39:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T16:06:49Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-mistral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 942c4135-e225-4072-a929-7998548563ef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-mistral
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f3d0d4415de730db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f3d0d4415de730db_train_data.json
type:
field_input: Moreinfo
field_instruction: Position
field_output: CV
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/942c4135-e225-4072-a929-7998548563ef
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 4679
micro_batch_size: 4
mlflow_experiment_name: /tmp/f3d0d4415de730db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02390628735357399
wandb_entity: null
wandb_mode: online
wandb_name: 213b2f40-7d78-40ca-8cc0-85380681cac5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 213b2f40-7d78-40ca-8cc0-85380681cac5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 942c4135-e225-4072-a929-7998548563ef
This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2514
## Model description
More information needed
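In lieu of further details, a minimal sketch for loading the LoRA adapter on top of its base model via the standard PEFT flow (an assumption based on the `peft` library tag):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("echarlaix/tiny-random-mistral")
model = PeftModel.from_pretrained(base, "Alphatao/942c4135-e225-4072-a929-7998548563ef")
model.eval()  # adapter weights are applied on top of the frozen base
```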
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 4679
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 83.0184 | 0.0002 | 1 | 10.3771 |
| 82.4725 | 0.0157 | 100 | 10.3081 |
| 82.3512 | 0.0313 | 200 | 10.2932 |
| 82.3996 | 0.0470 | 300 | 10.2884 |
| 82.2037 | 0.0627 | 400 | 10.2842 |
| 82.2038 | 0.0784 | 500 | 10.2788 |
| 82.1844 | 0.0940 | 600 | 10.2746 |
| 82.1807 | 0.1097 | 700 | 10.2714 |
| 82.2035 | 0.1254 | 800 | 10.2683 |
| 82.1132 | 0.1411 | 900 | 10.2660 |
| 82.1501 | 0.1567 | 1000 | 10.2642 |
| 82.142 | 0.1724 | 1100 | 10.2630 |
| 82.1508 | 0.1881 | 1200 | 10.2616 |
| 82.1372 | 0.2038 | 1300 | 10.2606 |
| 82.1296 | 0.2194 | 1400 | 10.2596 |
| 82.1202 | 0.2351 | 1500 | 10.2590 |
| 82.0928 | 0.2508 | 1600 | 10.2583 |
| 82.0691 | 0.2665 | 1700 | 10.2577 |
| 82.1119 | 0.2821 | 1800 | 10.2569 |
| 82.0947 | 0.2978 | 1900 | 10.2563 |
| 82.1192 | 0.3135 | 2000 | 10.2560 |
| 82.0347 | 0.3292 | 2100 | 10.2555 |
| 82.0395 | 0.3448 | 2200 | 10.2551 |
| 82.0739 | 0.3605 | 2300 | 10.2547 |
| 82.0571 | 0.3762 | 2400 | 10.2543 |
| 82.021 | 0.3919 | 2500 | 10.2540 |
| 82.0816 | 0.4075 | 2600 | 10.2537 |
| 82.0561 | 0.4232 | 2700 | 10.2535 |
| 82.049 | 0.4389 | 2800 | 10.2531 |
| 82.0867 | 0.4546 | 2900 | 10.2529 |
| 82.0198 | 0.4702 | 3000 | 10.2526 |
| 82.1186 | 0.4859 | 3100 | 10.2525 |
| 82.0431 | 0.5016 | 3200 | 10.2523 |
| 82.0169 | 0.5173 | 3300 | 10.2522 |
| 82.0835 | 0.5329 | 3400 | 10.2520 |
| 82.0196 | 0.5486 | 3500 | 10.2519 |
| 82.1073 | 0.5643 | 3600 | 10.2518 |
| 82.0386 | 0.5800 | 3700 | 10.2517 |
| 82.0942 | 0.5956 | 3800 | 10.2516 |
| 82.025 | 0.6113 | 3900 | 10.2516 |
| 82.0014 | 0.6270 | 4000 | 10.2515 |
| 82.0336 | 0.6427 | 4100 | 10.2515 |
| 81.994 | 0.6583 | 4200 | 10.2515 |
| 82.1177 | 0.6740 | 4300 | 10.2515 |
| 82.0593 | 0.6897 | 4400 | 10.2514 |
| 82.0582 | 0.7054 | 4500 | 10.2514 |
| 82.0283 | 0.7210 | 4600 | 10.2514 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hieptran318204/Ensemble | hieptran318204 | 2025-03-10T16:39:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T13:50:35Z | ---
license: apache-2.0
---
|
Basharat78/SFT_000-v1a-Experiment06-passed-salad_1096_samples_Mistral-Nemo-Base-2407__10.03.2025__16_27_25 | Basharat78 | 2025-03-10T16:38:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-03-10T16:35:09Z | ---
base_model: unsloth/mistral-nemo-base-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Basharat78
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-base-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
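A minimal generation sketch (assuming standard `transformers` loading; the repo ships 4-bit bitsandbytes weights, so `bitsandbytes` should be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Basharat78/SFT_000-v1a-Experiment06-passed-salad_1096_samples_Mistral-Nemo-Base-2407__10.03.2025__16_27_25"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```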
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
12q3s/q-Taxi-v3 | 12q3s | 2025-03-10T16:38:22Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-10T16:38:19Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="12q3s/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
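The `load_from_hub` helper used above is not defined in this card; below is a minimal sketch of one possible implementation, mirroring the Hugging Face Deep RL course utility (the pickled-dict payload format is an assumption based on the `q-learning.pkl` filename):

```python
import pickle

import gymnasium as gym  # assumption: the gymnasium backend is used for env creation
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table payload from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```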
|
od2025/dark_eta | od2025 | 2025-03-10T16:34:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-03-10T16:22:36Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
Together with [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF | mradermacher | 2025-03-10T16:32:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:linkonx/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0",
"base_model:quantized:linkonx/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:10:29Z | ---
base_model: linkonx/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/linkonx/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
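As a minimal Python illustration (an assumption-laden sketch, not part of the upstream docs: it presumes `huggingface_hub` and `llama-cpp-python` are installed, and uses the Q4_K_M filename from the table below):

```python
# Hedged sketch: fetch one static quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF",
    filename="llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("The meaning of life is", max_tokens=32)["choices"][0]["text"])
```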
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0-GGUF/resolve/main/llama-3-8b-bnb-4bit-LinkOnX-Modeler-Eng-v1.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
texanrangee/ed035835-fd9d-4c0c-ac97-246a8af28ff7 | texanrangee | 2025-03-10T16:32:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T16:01:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-grpo-1epochs | AlekseyKorshuk | 2025-03-10T16:32:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:AlekseyKorshuk/twscrape-prepared-trl",
"arxiv:2402.03300",
"base_model:AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs",
"base_model:finetune:AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T15:50:47Z | ---
base_model: AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs
datasets: AlekseyKorshuk/twscrape-prepared-trl
library_name: transformers
model_name: twscrape-prepared-trl-sft-qwen-3b-grpo-1epochs
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for twscrape-prepared-trl-sft-qwen-3b-grpo-1epochs
This model is a fine-tuned version of [AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs](https://huggingface.co/AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-sft-1epochs) on the [AlekseyKorshuk/twscrape-prepared-trl](https://huggingface.co/datasets/AlekseyKorshuk/twscrape-prepared-trl) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlekseyKorshuk/twscrape-prepared-trl-sft-qwen-3b-grpo-1epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aleksey-korshuk/huggingface/runs/1tikjkyr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.0.1
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF | matrixportal | 2025-03-10T16:31:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mergekit-community/mergekit-della_linear-cwuosuu",
"mergekit-community/mergekit-della_linear-nimxtnw",
"mergekit-community/mergekit-della_linear-vpjjtsa",
"Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova",
"base_model:quantized:ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-03-10T16:05:48Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mergekit-community/mergekit-della_linear-cwuosuu
- mergekit-community/mergekit-della_linear-nimxtnw
- mergekit-community/mergekit-della_linear-vpjjtsa
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- llama-cpp
- gguf-my-repo
language:
- en
base_model: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
pipeline_tag: text-generation
library_name: transformers
---
# matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF
This model was converted to GGUF format from [`ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova`](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF --hf-file llama-3.1-8b-supertulu-lexinova-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF --hf-file llama-3.1-8b-supertulu-lexinova-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF --hf-file llama-3.1-8b-supertulu-lexinova-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/Llama-3.1-8B-SuperTulu-LexiNova-GGUF --hf-file llama-3.1-8b-supertulu-lexinova-q4_k_s.gguf -c 2048
```
|
yuhuili/EAGLE-LLaMA3.1-Instruct-8B | yuhuili | 2025-03-10T16:31:25Z | 0 | 0 | null | [
"pytorch",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T16:26:21Z | ---
license: apache-2.0
---
|
enuma-elis/mistral_3mini_changed_params | enuma-elis | 2025-03-10T16:31:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T14:54:09Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cuong2003/merged-mistral-10032025 | cuong2003 | 2025-03-10T16:30:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T16:26:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vevotx/Tahoe-100M-SCVI-v1 | vevotx | 2025-03-10T16:29:30Z | 0 | 1 | scvi-tools | [
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.2.0",
"anndata_version:0.11.1",
"modality:rna",
"tissue:None",
"annotated:True",
"doi:10.57967/hf/4704",
"license:mit",
"region:us"
] | null | 2025-01-27T02:06:04Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: scvi-tools
license: mit
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.2.0
- anndata_version:0.11.1
- modality:rna
- tissue:None
- annotated:True
---
# Model Card for Tahoe-100M-SCVI-v1
<!-- Provide a quick summary of what the model is/does. -->
An SCVI model and minified AnnData of the [Tahoe-100M](https://doi.org/10.1101/2025.02.20.639398) dataset from Vevo Tx.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Tahoe-100M-SCVI-v1
- **Developed by:** Vevo Tx
- **Model type:** SCVI variational autoencoder
- **License:** This model is licensed under the MIT License.
### Model Architecture
SCVI model
Layers: 1, Hidden Units: 128, Latent Dimensions: 10
### Parameters
40,390,510
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
- Decoding Tahoe-100M data representation vectors to gene expression.
- Encoding scRNA-seq data to Tahoe-100M cell state representation space (see the sketch below).
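A minimal sketch of the representation side, assuming `tahoe` was pulled as in "How to Get Started" below (the query-data call is an assumption about the usual scvi-tools workflow, not demonstrated in this card):

```python
# The minified AnnData stores each cell's posterior latent mean in obsm.
z = tahoe.adata.obsm["X_latent_qzm"]   # shape: (n_cells, 10)
print(z[:5])

# For new scRNA-seq data, the usual scvi-tools route would be to prepare a
# query AnnData against this model and call tahoe.get_latent_representation(query).
```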
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
- Adaptation to additional scRNA-seq data
### Intended Users
- **Computational biologists** analyzing gene expression responses to drug perturbations.
- **Machine learning researchers** developing methods for downstream drug response prediction.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Reconstructed gene expression values may be inaccurate. Calibration analysis shows that the 95% confidence intervals of the model's posterior predictive distribution contain the observed counts 97.7% of the time. However, a naive baseline that predicts only zero counts achieves 97.4% on the same metric.
The Tahoe-100M data is based on cancer cell lines under drug treatment, and the model is trained to represent this data. The model may not be directly applicable to other forms of scRNA-seq data, such as that from primary cells.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
Loading the minified AnnData requires 41 GB of storage (saved in the `cache_dir`) and as much RAM. The model itself requires ~1 GB of GPU memory.
```
> import scvi.hub
> tahoe_hubmodel = scvi.hub.HubModel.pull_from_huggingface_hub(
repo_name = 'vevotx/Tahoe-100M-SCVI-v1',
cache_dir = '/path/to/cache'
)
> tahoe = tahoe_hubmodel.model
> tahoe
SCVI model with the following parameters:
n_hidden: 128, n_latent: 10, n_layers: 1, dropout_rate: 0.1, dispersion: gene, gene_likelihood: nb,
latent_distribution: normal.
Training status: Trained
Model's adata is minified?: True
> tahoe.adata
AnnData object with n_obs × n_vars = 95624334 × 62710
obs: 'sample', 'species', 'gene_count', 'tscp_count', 'mread_count', 'bc1_wind', 'bc2_wind', 'bc3_wind', 'bc1_well', 'bc2_well', 'bc3_well', 'id', 'drugname_drugconc', 'drug', 'INT_ID', 'NUM.SNPS', 'NUM.READS', 'demuxlet_call', 'BEST.LLK', 'NEXT.LLK', 'DIFF.LLK.BEST.NEXT', 'BEST.POSTERIOR', 'SNG.POSTERIOR', 'SNG.BEST.LLK', 'SNG.NEXT.LLK', 'SNG.ONLY.POSTERIOR', 'DBL.BEST.LLK', 'DIFF.LLK.SNG.DBL', 'sublibrary', 'BARCODE', 'pcnt_mito', 'S_score', 'G2M_score', 'phase', 'pass_filter', 'dataset', '_scvi_batch', '_scvi_labels', '_scvi_observed_lib_size', 'plate', 'Cell_Name_Vevo', 'Cell_ID_Cellosaur'
var: 'gene_id', 'genome', 'SUB_LIB_ID'
uns: '_scvi_adata_minify_type', '_scvi_manager_uuid', '_scvi_uuid'
obsm: 'X_latent_qzm', 'X_latent_qzv', '_scvi_latent_qzm', '_scvi_latent_qzv'
layers: 'counts'
> # Take some random genes
> gene_list = tahoe.adata.var.sample(10).index
> # Take some random cells
> cell_indices = tahoe.adata.obs.sample(10).index
> # Decode gene expression
> gene_expression = tahoe.get_normalized_expression(tahoe.adata[cell_indices], gene_list = gene_list)
> print(gene_expression)
gene_name TSPAN13 ZSCAN9 ENSG00000200991 ENSG00000224901 \
BARCODE_SUB_LIB_ID
73_177_027-lib_2615 0.000036 0.000005 4.255257e-10 9.856240e-08
63_080_025-lib_2087 0.000012 0.000012 3.183158e-10 1.124618e-07
01_070_028-lib_1543 0.000005 0.000010 1.604187e-10 1.022676e-07
07_110_046-lib_1885 0.000035 0.000018 2.597950e-09 1.063819e-07
93_082_010-lib_2285 0.000008 0.000009 8.147555e-10 9.102466e-08
94_154_081-lib_2562 0.000035 0.000014 5.600219e-10 6.891351e-08
47_102_103-lib_2596 0.000021 0.000010 7.320031e-10 1.190017e-07
92_138_169-lib_2356 0.000038 0.000015 3.393952e-10 7.600610e-08
35_035_133-lib_2378 0.000041 0.000004 1.503101e-10 9.447428e-08
06_084_182-lib_2611 0.000007 0.000014 5.135248e-10 7.896663e-08
gene_name RN7SL69P ENSG00000263301 ENSG00000269886 \
BARCODE_SUB_LIB_ID
73_177_027-lib_2615 2.390874e-10 1.896764e-07 7.665454e-08
63_080_025-lib_2087 1.934646e-10 2.205981e-07 6.038700e-08
01_070_028-lib_1543 9.687608e-11 9.900592e-08 5.225622e-08
07_110_046-lib_1885 1.694676e-09 2.274248e-07 7.741949e-08
93_082_010-lib_2285 6.253397e-10 2.593786e-07 7.113768e-08
94_154_081-lib_2562 3.700961e-10 2.083358e-07 6.379186e-08
47_102_103-lib_2596 4.534019e-10 2.551739e-07 4.840992e-08
92_138_169-lib_2356 2.018963e-10 2.067301e-07 4.144172e-08
35_035_133-lib_2378 8.090239e-11 1.658230e-07 3.890900e-08
06_084_182-lib_2611 3.474709e-10 1.025397e-07 4.995985e-08
...
47_102_103-lib_2596 1.975285e-09 7.876221e-08 1.513182e-08
92_138_169-lib_2356 1.214693e-09 4.208334e-08 1.091937e-08
35_035_133-lib_2378 1.049879e-09 8.961482e-08 1.650536e-08
06_084_182-lib_2611 2.311277e-09 5.680565e-08 1.824982e-08
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Tahoe-100M
Zhang, Jesse, Airol A. Ubas, Richard de Borja, Valentine Svensson, Nicole Thomas, Neha Thakar, Ian Lai, et al. 2025. “Tahoe-100M: A Giga-Scale Single-Cell Perturbation Atlas for Context-Dependent Gene Function and Cellular Modeling.” bioRxiv. https://doi.org/10.1101/2025.02.20.639398.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained using the SCVI `.train()` method. One plate (plate 14) of the training data was held out from training to be used for evaluation and criticism. A callback was used to evaluate the reconstruction error of the training and validation sets every N minibatches rather than every epoch, since a single epoch is too large to give informative training curves. An additional callback was used to save snapshots of the model state at every epoch.
#### Training Hyperparameters
- **Training regime:** fp32 precision was used for training.
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
Data in the minified AnnData where the 'plate' column equals '14' was held out from training and used for evaluation and criticism.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The main metric is reconstruction error, defined as the average negative log likelihood of the observed counts given the representation vectors. This model uses a negative binomial likelihood.
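For reference, a standard mean-dispersion parameterization of that negative binomial log-likelihood (a textbook formula stated for illustration, not taken from the scvi-tools source) is

$$
\log p_{\mathrm{NB}}(x \mid \mu, \theta) = \log\Gamma(x+\theta) - \log\Gamma(\theta) - \log\Gamma(x+1) + \theta \log\frac{\theta}{\theta+\mu} + x \log\frac{\mu}{\theta+\mu},
$$

with mean μ and inverse dispersion θ; the reconstruction error reported here averages the negative of this quantity over cells and genes.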
|
Thiraput01/WangchanFondue-v2-finetuned | Thiraput01 | 2025-03-10T16:28:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-10T16:28:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
od2025/dark_zeta | od2025 | 2025-03-10T16:27:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-03-10T16:21:46Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
Together with [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
uriel353/christina-hendricks-flux | uriel353 | 2025-03-10T16:26:15Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-03-10T16:23:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Fujicolor Superia X-TRA 400, 50mm, analog ultra closeup photo of christina
hendricks with sultry azure blue downturned eyes and full lips with an
alluring attitude. She has long red hair and makeup combining shades of
pastel pink and stylish eyeliner as well as lipgloss, further accessorized
by stud earrings. Set inside a vintage design diner in 50s style, on a dark
rainy spring night. With natural split lighting, hyperrealistic, very
aesthetic, authentic,
output:
url: images/ComfyUI_00000_2967_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# christina-hendricks-flux
<Gallery />
## Model description
It's not my model. I just uploaded it here.
https://civitai.com/models/641156?modelVersionId=732685
## Download model
Weights for this model are available in Safetensors format.
[Download](/uriel353/christina-hendricks-flux/tree/main) them in the Files & versions tab.
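A minimal loading sketch, assuming the standard diffusers LoRA workflow for the FLUX.1-dev base listed above (the prompt is illustrative; check the Files & versions tab for the exact weight filename if loading a single file):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("uriel353/christina-hendricks-flux")  # LoRA from this repo
image = pipe("analog closeup portrait photo, split lighting", num_inference_steps=28).images[0]
image.save("out.png")
```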
|
jiinking/6_random_MQA_llama_model | jiinking | 2025-03-10T16:25:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T15:56:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
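Since the card is otherwise unfilled, the following is only a generic sketch assuming a standard causal language-model checkpoint, as suggested by the repository tags (`llama`, `text-generation`):

```python
from transformers import pipeline

# Generic starting point for a causal LM checkpoint; adjust device and dtype as needed.
generator = pipeline("text-generation", model="jiinking/6_random_MQA_llama_model")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```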
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ey-luccas/Nekhor_Buddhism_llm_2.2 | Ey-luccas | 2025-03-10T16:25:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-03-10T16:22:22Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
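In the absence of documented usage, here is only a minimal sketch for loading this PEFT adapter on top of the base model named in the metadata (`unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter.
base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Ey-luccas/Nekhor_Buddhism_llm_2.2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```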
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Yoesph/Changeling-v1.0-24b | Yoesph | 2025-03-10T16:23:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:arcee-ai/Arcee-Blitz",
"base_model:merge:arcee-ai/Arcee-Blitz",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T16:13:52Z | ---
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- arcee-ai/Arcee-Blitz
- ReadyArt/Forgotten-Safeword-24B-V2.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) as a base.
### Models Merged
The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Arcee-Blitz
#no parameters necessary for base model
- model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
density: 0.5
weight: 0.5
- model: ReadyArt/Forgotten-Safeword-24B-V2.2
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: arcee-ai/Arcee-Blitz
parameters:
normalize: false
int8_mask: true
dtype: float16
```
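As a usage note (not part of the original card), the merged checkpoint should load like any other Mistral-architecture causal LM; a minimal sketch, assuming the repo contains the full merged weights, with `torch.float16` matching the `dtype` in the merge config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Yoesph/Changeling-v1.0-24b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```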
|
bartowski/dnotitia_DNA-R1-GGUF | bartowski | 2025-03-10T16:23:35Z | 0 | 0 | null | [
"gguf",
"dnotitia",
"nlp",
"llm",
"slm",
"conversation",
"chat",
"reasoning",
"r1",
"text-generation",
"en",
"ko",
"base_model:dnotitia/DNA-R1",
"base_model:quantized:dnotitia/DNA-R1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-03-10T15:34:28Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license: cc-by-nc-4.0
base_model: dnotitia/DNA-R1
tags:
- dnotitia
- nlp
- llm
- slm
- conversation
- chat
- reasoning
- r1
language:
- en
- ko
---
## Llamacpp imatrix Quantizations of DNA-R1 by dnotitia
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4792">b4792</a> for quantization.
Original model: https://huggingface.co/dnotitia/DNA-R1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system<|im_sep|>{system_prompt}<|im_end|><|im_start|>user<|im_sep|>{prompt}
<|im_end|><|im_start|>assistant<|im_sep|><think>
```
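For scripted use, the prompt format above can be applied with llama-cpp-python, which can also fetch a quant straight from this repo; a minimal sketch, where the chosen file name must match one of the quants in the table below and `n_ctx` is an arbitrary choice:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the named quant from this repo on first use.
llm = Llama.from_pretrained(
    repo_id="bartowski/dnotitia_DNA-R1-GGUF",
    filename="dnotitia_DNA-R1-Q4_K_M.gguf",
    n_ctx=4096,
)

# Apply the prompt format shown above.
prompt = (
    "<|im_start|>system<|im_sep|>You are a helpful assistant.<|im_end|>"
    "<|im_start|>user<|im_sep|>Why is the sky blue?<|im_end|>"
    "<|im_start|>assistant<|im_sep|><think>"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```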
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [DNA-R1-Q8_0.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q8_0.gguf) | Q8_0 | 15.58GB | false | Extremely high quality, generally unneeded but max available quant. |
| [DNA-R1-Q6_K_L.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q6_K_L.gguf) | Q6_K_L | 12.28GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [DNA-R1-Q6_K.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q6_K.gguf) | Q6_K | 12.03GB | false | Very high quality, near perfect, *recommended*. |
| [DNA-R1-Q5_K_L.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q5_K_L.gguf) | Q5_K_L | 10.92GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [DNA-R1-Q5_K_M.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q5_K_M.gguf) | Q5_K_M | 10.60GB | false | High quality, *recommended*. |
| [DNA-R1-Q5_K_S.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q5_K_S.gguf) | Q5_K_S | 10.15GB | false | High quality, *recommended*. |
| [DNA-R1-Q4_K_L.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q4_K_L.gguf) | Q4_K_L | 9.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [DNA-R1-Q4_1.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q4_1.gguf) | Q4_1 | 9.27GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [DNA-R1-Q4_K_M.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q4_K_M.gguf) | Q4_K_M | 9.05GB | false | Good quality, default size for most use cases, *recommended*. |
| [DNA-R1-Q4_K_S.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q4_K_S.gguf) | Q4_K_S | 8.44GB | false | Slightly lower quality with more space savings, *recommended*. |
| [DNA-R1-Q4_0.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q4_0.gguf) | Q4_0 | 8.41GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [DNA-R1-IQ4_NL.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ4_NL.gguf) | IQ4_NL | 8.38GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [DNA-R1-Q3_K_XL.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q3_K_XL.gguf) | Q3_K_XL | 8.38GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [DNA-R1-IQ4_XS.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ4_XS.gguf) | IQ4_XS | 7.94GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [DNA-R1-Q3_K_L.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q3_K_L.gguf) | Q3_K_L | 7.93GB | false | Lower quality but usable, good for low RAM availability. |
| [DNA-R1-Q3_K_M.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q3_K_M.gguf) | Q3_K_M | 7.36GB | false | Low quality. |
| [DNA-R1-IQ3_M.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ3_M.gguf) | IQ3_M | 6.91GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [DNA-R1-Q3_K_S.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q3_K_S.gguf) | Q3_K_S | 6.50GB | false | Low quality, not recommended. |
| [DNA-R1-IQ3_XS.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ3_XS.gguf) | IQ3_XS | 6.25GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [DNA-R1-Q2_K_L.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q2_K_L.gguf) | Q2_K_L | 6.05GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [DNA-R1-IQ3_XXS.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ3_XXS.gguf) | IQ3_XXS | 5.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [DNA-R1-Q2_K.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-Q2_K.gguf) | Q2_K | 5.55GB | false | Very low quality but surprisingly usable. |
| [DNA-R1-IQ2_M.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ2_M.gguf) | IQ2_M | 5.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [DNA-R1-IQ2_S.gguf](https://huggingface.co/bartowski/dnotitia_DNA-R1-GGUF/blob/main/dnotitia_DNA-R1-IQ2_S.gguf) | IQ2_S | 4.73GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/dnotitia_DNA-R1-GGUF --include "dnotitia_DNA-R1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/dnotitia_DNA-R1-GGUF --include "dnotitia_DNA-R1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (dnotitia_DNA-R1-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking the weights, llama.cpp will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 layout for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
telemauritius7/Modi | telemauritius7 | 2025-03-10T16:22:30Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-10T15:59:31Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Modi
---
# Modi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Modi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('telemauritius7/Modi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Sophie-Rain-Spider-Man-hot-Video-Tutorial/Link.Video.Sophie.Rain.Spider-Man.Video | Sophie-Rain-Spider-Man-hot-Video-Tutorial | 2025-03-10T16:19:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-10T16:18:20Z | |
jaywo/huggingfacemodel | jaywo | 2025-03-10T16:18:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-10T16:18:54Z | ---
license: apache-2.0
---
|
shrey123354/photoshoot | shrey123354 | 2025-03-10T16:18:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-10T15:50:30Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Photoshoot
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('shrey123354/photoshoot', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF | mradermacher | 2025-03-10T16:17:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:linkonx/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1",
"base_model:quantized:linkonx/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T15:55:55Z | ---
base_model: linkonx/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/linkonx/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
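To grab a single quant programmatically rather than through a browser, the standard Hub client works; a minimal sketch, where any filename from the table below can be substituted:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF",
    filename="Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q4_K_M.gguf",
)
print(path)  # local file path, usable with llama.cpp or any GGUF-compatible runtime
```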
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1-GGUF/resolve/main/Llama-3-Open-Ko-8B-LinkOnX-Modeler-Code-v1.3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cnababaie/midtuti | cnababaie | 2025-03-10T16:16:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T09:58:35Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cnababaie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
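The card documents no inference code, so here is only a minimal sketch assuming a standard Gemma-2 causal LM checkpoint, per the repository tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cnababaie/midtuti"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```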
|
qqfang97/r1-q | qqfang97 | 2025-03-10T16:16:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:15:28Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** qqfang97
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aimeri/Rocinante-12B-v1.1-6bit | aimeri | 2025-03-10T16:15:27Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:quantized:TheDrummer/Rocinante-12B-v1.1",
"license:other",
"6-bit",
"region:us"
] | null | 2025-03-10T16:14:56Z | ---
license: other
tags:
- mlx
base_model: TheDrummer/Rocinante-12B-v1.1
---
# aimeri/Rocinante-12B-v1.1-6bit
The Model [aimeri/Rocinante-12B-v1.1-6bit](https://huggingface.co/aimeri/Rocinante-12B-v1.1-6bit) was converted to MLX format from [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1) using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("aimeri/Rocinante-12B-v1.1-6bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
od2025/dark_epsilon | od2025 | 2025-03-10T16:15:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-03-10T16:14:16Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
Alphatao/0a07cf8f-77d2-43ad-a6e9-89f401e8ec49 | Alphatao | 2025-03-10T16:15:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | 2025-03-10T15:58:25Z | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a07cf8f-77d2-43ad-a6e9-89f401e8ec49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e8ef6edb66e20da7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e8ef6edb66e20da7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/0a07cf8f-77d2-43ad-a6e9-89f401e8ec49
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2520
micro_batch_size: 4
mlflow_experiment_name: /tmp/e8ef6edb66e20da7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6f911363-8c0f-4331-9742-a2fb57ee53b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6f911363-8c0f-4331-9742-a2fb57ee53b7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0a07cf8f-77d2-43ad-a6e9-89f401e8ec49
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 10.7376
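Since this repo ships a LoRA adapter (see the PEFT framework version below), a minimal loading sketch is to load the base model named in the config and attach the adapter; `trust_remote_code=True` mirrors the training configuration above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the tiny Falcon base model, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("fxmarty/really-tiny-falcon-testing", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Alphatao/0a07cf8f-77d2-43ad-a6e9-89f401e8ec49")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/really-tiny-falcon-testing")
```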
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2520
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 88.7573 | 0.0005 | 1 | 11.0922 |
| 86.6896 | 0.0503 | 100 | 10.8462 |
| 86.5276 | 0.1007 | 200 | 10.8125 |
| 86.4563 | 0.1510 | 300 | 10.7933 |
| 86.4675 | 0.2013 | 400 | 10.7781 |
| 86.3597 | 0.2517 | 500 | 10.7711 |
| 86.2831 | 0.3020 | 600 | 10.7652 |
| 86.2862 | 0.3523 | 700 | 10.7596 |
| 86.0115 | 0.4027 | 800 | 10.7555 |
| 86.0142 | 0.4530 | 900 | 10.7527 |
| 86.0606 | 0.5033 | 1000 | 10.7495 |
| 86.0031 | 0.5537 | 1100 | 10.7475 |
| 86.0867 | 0.6040 | 1200 | 10.7452 |
| 86.0885 | 0.6543 | 1300 | 10.7447 |
| 85.8223 | 0.7047 | 1400 | 10.7435 |
| 85.5493 | 0.7550 | 1500 | 10.7415 |
| 86.2632 | 0.8053 | 1600 | 10.7404 |
| 86.1295 | 0.8557 | 1700 | 10.7397 |
| 86.1934 | 0.9060 | 1800 | 10.7389 |
| 85.7078 | 0.9563 | 1900 | 10.7383 |
| 85.986 | 1.0067 | 2000 | 10.7382 |
| 86.2705 | 1.0570 | 2100 | 10.7379 |
| 86.2735 | 1.1073 | 2200 | 10.7377 |
| 86.3739 | 1.1577 | 2300 | 10.7376 |
| 86.1081 | 1.2080 | 2400 | 10.7376 |
| 85.8134 | 1.2583 | 2500 | 10.7376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Mantis2024/Dirty-Shirley-Quill-v1-gemma-2-Ifable-9B-Uncensored-slerp | Mantis2024 | 2025-03-10T16:13:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT",
"base_model:merge:nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT",
"base_model:sam-paech/Quill-v1",
"base_model:merge:sam-paech/Quill-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T16:08:07Z | ---
base_model:
- sam-paech/Quill-v1
- nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1)
* [nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT](https://huggingface.co/nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: sam-paech/Quill-v1
layer_range: [0, 42]
- model: nkpz/gemma-2-Ifable-9B-Uncensored-DeLMAT
layer_range: [0, 42]
merge_method: slerp
base_model: sam-paech/Quill-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
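As a usage note (not part of the original card), the merged model should behave like any Gemma-2 chat checkpoint; a minimal sketch, noting that Gemma-2's chat template takes no system turn:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Mantis2024/Dirty-Shirley-Quill-v1-gemma-2-Ifable-9B-Uncensored-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a two-sentence story about a lighthouse."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))  # decode only the new tokens
```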
|
StefanStefan/Wav2Vec-100-CSR-70M | StefanStefan | 2025-03-10T16:13:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-10T16:11:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
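In the absence of documented usage, the following is only a generic sketch assuming a standard Wav2Vec2 CTC checkpoint, per the repository's `automatic-speech-recognition` tag:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="StefanStefan/Wav2Vec-100-CSR-70M")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```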
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/r1-1.5b-longthought-1K-GGUF | mradermacher | 2025-03-10T16:12:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mkhalifa/r1-1.5b-longthought-1K",
"base_model:quantized:mkhalifa/r1-1.5b-longthought-1K",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-10T16:00:07Z | ---
base_model: mkhalifa/r1-1.5b-longthought-1K
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mkhalifa/r1-1.5b-longthought-1K
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/r1-1.5b-longthought-1K-GGUF/resolve/main/r1-1.5b-longthought-1K.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
weathermanj/Menda-3B-500 | weathermanj | 2025-03-10T16:11:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen",
"grpo",
"instruct",
"fine-tuned",
"reasoning",
"3b",
"menda",
"chat",
"conversational",
"en",
"dataset:gsm8k",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T15:01:40Z | ---
language: en
license: other
tags:
- qwen
- grpo
- instruct
- fine-tuned
- reasoning
- 3b
- menda
- chat
- transformers
library_name: transformers
datasets:
- gsm8k
model-index:
- name: Menda-3B-500
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: arc-challenge
name: ARC-Challenge
metrics:
- name: Accuracy
type: accuracy
value: 50.0
- task:
type: text-generation
name: Text Generation
dataset:
type: boolq
name: BoolQ
metrics:
- name: Accuracy
type: accuracy
value: 90.0
- task:
type: text-generation
name: Text Generation
dataset:
type: hellaswag
name: HellaSwag
metrics:
- name: Accuracy
type: accuracy
value: 40.0
- task:
type: text-generation
name: Text Generation
dataset:
type: mmlu
name: MMLU (Overall)
metrics:
- name: Accuracy
type: accuracy
value: 68.60
---
# Menda-3B-500: GRPO-Tuned Qwen2.5 Model
Menda-3B-500 is a fine-tuned version of Qwen2.5-3B-Instruct, trained with GRPO (Group Relative Policy Optimization) for 500 steps. This model shows improved performance on reasoning benchmarks compared to the base model.
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Training Method**: GRPO (Group Relative Policy Optimization)
- **Training Steps**: 500
- **Parameters**: 3 billion
- **Context Length**: 32K tokens
- **Training Data**: GSM8K (mathematical reasoning)
- **Chat Template**: Uses the Qwen2 chat template
## Chat Format
This model uses the standard Qwen2 chat template. For best results when using the model directly, format your prompts as follows:
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant
```
When using the model through the Hugging Face Transformers library, the chat template will be applied automatically when using the `chat_template` functionality:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Explain the concept of machine learning in simple terms."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Benchmark Results
Menda-3B-500 has been evaluated on several standard benchmarks:
| Benchmark | Task Type | Accuracy |
|-----------|-----------|----------|
| ARC-Challenge | Scientific Reasoning | 50.0% |
| BoolQ | Reading Comprehension | 90.0% |
| HellaSwag | Common Sense Reasoning | 40.0% |
| Lambada | Text Completion | 70.0% |
| PIQA | Physical Reasoning | 90.0% |
| Winogrande | Commonsense Reasoning | 90.0% |
### MMLU Performance
| MMLU Category | Score |
|---------------|-------|
| Overall | 68.60% |
| Humanities | 75.38% |
| Social Sciences | 75.83% |
| STEM | 60.00% |
| Other | 67.69% |
## Key Strengths
- **Balanced Performance**: Maintains strong performance across diverse tasks with minimal trade-offs.
- **Improved BoolQ**: Achieves 90% on BoolQ, showing excellent reading comprehension capabilities.
- **Strong Reasoning**: Maintains 90% on both PIQA and Winogrande, demonstrating robust reasoning abilities.
- **Efficient Training**: Achieves impressive results with relatively minimal training (500 steps).
- **Stable Knowledge**: Maintains strong MMLU performance (68.60%) across diverse knowledge domains.
## Usage Examples
### Basic Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Chat Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Using with Ollama
You can also use this model with Ollama by converting it to GGUF format:
```bash
# Convert to GGUF
python -m llama_cpp.convert_hf_to_gguf weathermanj/Menda-3B-500 --outfile menda-3b-500.gguf
# Create Ollama model
cat > Modelfile << EOF
FROM menda-3b-500.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
EOF
ollama create menda-3b-500 -f Modelfile
ollama run menda-3b-500
```
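Note that the passthrough `TEMPLATE` above ignores the chat format. For chat-style use, a Modelfile template along these lines (an untested sketch mirroring the Qwen2 chat template) is likely a better fit:
```
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```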
## Training Configuration
The model was trained using the GRPO methodology with the following configuration:
- **LoRA Rank**: 128
- **Learning Rate**: 5e-6
- **Optimizer**: AdamW (8-bit)
- **Batch Size**: 8 per device
- **Gradient Accumulation Steps**: 4
- **Training Samples**: 100 examples from GSM8K
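As an illustration only (not the authors' training script), a run with this configuration could be sketched with TRL's `GRPOTrainer`; the reward function below is a placeholder, and a real run would score GSM8K answers for correctness:
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column.
dataset = load_dataset("gsm8k", "main", split="train[:100]")
dataset = dataset.rename_column("question", "prompt")

def placeholder_reward(completions, **kwargs):
    # Stand-in reward; replace with an answer-correctness check for GSM8K.
    return [float(len(c) > 0) for c in completions]

args = GRPOConfig(
    output_dir="menda-3b-grpo",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    max_steps=500,
    optim="adamw_bnb_8bit",  # 8-bit AdamW, per the card
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=placeholder_reward,
    args=args,
    train_dataset=dataset,
    peft_config=LoraConfig(r=128, lora_alpha=128, task_type="CAUSAL_LM"),
)
trainer.train()
```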
## License
This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2.5 license](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE) for details.
|
weathermanj/Menda-3B-250 | weathermanj | 2025-03-10T16:11:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen",
"grpo",
"instruct",
"fine-tuned",
"reasoning",
"3b",
"menda",
"chat",
"conversational",
"en",
"dataset:gsm8k",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-10T15:00:31Z | ---
language: en
license: other
tags:
- qwen
- grpo
- instruct
- fine-tuned
- reasoning
- 3b
- menda
- chat
- transformers
library_name: transformers
datasets:
- gsm8k
model-index:
- name: Menda-3B-250
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: arc-challenge
name: ARC-Challenge
metrics:
- name: Accuracy
type: accuracy
value: 50.0
- task:
type: text-generation
name: Text Generation
dataset:
type: boolq
name: BoolQ
metrics:
- name: Accuracy
type: accuracy
value: 80.0
- task:
type: text-generation
name: Text Generation
dataset:
type: hellaswag
name: HellaSwag
metrics:
- name: Accuracy
type: accuracy
value: 40.0
- task:
type: text-generation
name: Text Generation
dataset:
type: mmlu
name: MMLU (Overall)
metrics:
- name: Accuracy
type: accuracy
value: 68.95
---
# Menda-3B-250: GRPO-Tuned Qwen2.5 Model
Menda-3B-250 is a fine-tuned version of Qwen2.5-3B-Instruct, trained with GRPO (Group Relative Policy Optimization) for 250 steps. This model shows improved performance on reasoning benchmarks compared to the base model.
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Training Method**: GRPO (Group Relative Policy Optimization)
- **Training Steps**: 250
- **Parameters**: 3 billion
- **Context Length**: 32K tokens
- **Training Data**: GSM8K (mathematical reasoning)
- **Chat Template**: Uses the Qwen2 chat template
## Chat Format
This model uses the standard Qwen2 chat template. For best results when using the model directly, format your prompts as follows:
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant
```
When using the model through the Hugging Face Transformers library, the chat template will be applied automatically when using the `chat_template` functionality:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-250"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Explain the concept of machine learning in simple terms."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Benchmark Results
Menda-3B-250 has been evaluated on several standard benchmarks:
| Benchmark | Task Type | Accuracy |
|-----------|-----------|----------|
| ARC-Challenge | Scientific Reasoning | 50.0% |
| BoolQ | Reading Comprehension | 80.0% |
| HellaSwag | Common Sense Reasoning | 40.0% |
| Lambada | Text Completion | 70.0% |
| PIQA | Physical Reasoning | 90.0% |
| Winogrande | Commonsense Reasoning | 90.0% |
### MMLU Performance
| MMLU Category | Score |
|---------------|-------|
| Overall | 68.95% |
| Humanities | 76.92% |
| Social Sciences | 75.83% |
| STEM | 60.00% |
| Other | 67.69% |
## Key Strengths
- **Highest MMLU Score**: This checkpoint achieves the highest overall MMLU score (68.95%) among all checkpoints in the training progression.
- **Strong Humanities Performance**: Exceptional performance in humanities subjects (76.92%).
- **Efficient Training**: Achieves impressive results with minimal training (only 250 steps).
- **Balanced Capabilities**: Maintains strong performance across diverse tasks without significant trade-offs.
## Usage Examples
### Basic Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-250"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Chat Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "weathermanj/Menda-3B-250"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Using with Ollama
You can also use this model with Ollama by converting it to GGUF format:
```bash
# Convert to GGUF
python -m llama_cpp.convert_hf_to_gguf weathermanj/Menda-3B-250 --outfile menda-3b-250.gguf
# Create Ollama model
cat > Modelfile << EOF
FROM menda-3b-250.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
EOF
ollama create menda-3b-250 -f Modelfile
ollama run menda-3b-250
```
## Training Configuration
The model was trained using the GRPO methodology with the following configuration:
- **LoRA Rank**: 128
- **Learning Rate**: 5e-6
- **Optimizer**: AdamW (8-bit)
- **Batch Size**: 8 per device
- **Gradient Accumulation Steps**: 4
- **Training Samples**: 100 examples from GSM8K
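For reference, a 100-example GSM8K slice like the one used here can be loaded as follows (the exact examples the authors sampled are not specified, so a head slice is an assumption):
```python
from datasets import load_dataset

train_subset = load_dataset("gsm8k", "main", split="train[:100]")
print(train_subset[0]["question"])
```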
## License
This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2.5 license](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE) for details.
|
srjjdoborel/jotajota | srjjdoborel | 2025-03-10T16:10:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-10T15:44:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JOTA
---
# Jotajota
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JOTA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('srjjdoborel/jotajota', weight_name='lora.safetensors')
image = pipeline('JOTA, your prompt').images[0]  # include the trigger word JOTA
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
eramth/realism-sdxl | eramth | 2025-03-10T16:08:02Z | 161 | 0 | diffusers | [
"diffusers",
"safetensors",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-02-16T12:03:09Z | ---
library_name: diffusers
license: openrail++
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
---
A realism-focused portrait SDXL model shipped with a memory-efficient SDXL VAE that saves about 3 GB of VRAM during VAE decoding with almost no loss of image quality.



# Recommended arguments
Steps: 20-30, CFG: 2-4
# Usage
```python
from diffusers import StableDiffusionXLPipeline
import torch
pipeline = StableDiffusionXLPipeline.from_pretrained("eramth/realism-sdxl",torch_dtype=torch.float16).to("cuda")
# This allows you to generate higher resolution images without much extra VRAM usage.
pipeline.vae.enable_tiling()
image = pipeline(prompt="a beautiful woman", num_inference_steps=25, guidance_scale=2.5).images[0]
image.save("realism_sdxl_out.png")
``` |
DBInsight/deepseek-legal-assistant | DBInsight | 2025-03-10T16:02:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-10T12:11:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ikenna1234/llama_3.2_1b_instruct_reward_model_iter_1 | ikenna1234 | 2025-03-10T16:02:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-03-10T16:01:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JLTastet/ppo-LunarLander-v2 | JLTastet | 2025-03-10T16:01:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-07T05:16:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.02 +/- 43.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list if it differs):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(repo_id="JLTastet/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
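To sanity-check the agent, you can roll it out with `evaluate_policy` (assumes `gymnasium[box2d]` is installed; newer Gymnasium releases may name the environment `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```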
|
od2025/dark_gamma | od2025 | 2025-03-10T16:00:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-03-10T15:59:47Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
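For example (an illustrative sketch reusing `model`, `tokenizer` and `device` from the snippets above, with speaker and quality terms per the tips):
```py
prompt = "Hey, how are you doing today?"
description = (
    "Laura's voice is expressive, with a fast speaking rate and high pitch. "
    "The recording is very clear audio, with almost no background noise."
)
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
```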
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing code, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |