modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 18:27:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 18:23:41) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
ASethi04/meta-llama-Llama-3.1-8B-tulu-cot-second-lora-4-0.0001 | ASethi04 | 2025-05-04T12:08:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T11:56:01Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-tulu-cot-second-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-tulu-cot-second-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-tulu-cot-second-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/p21kmrck)
This model was trained with SFT.
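For reference, a minimal TRL SFT setup looks roughly like the sketch below; the dataset is a placeholder (not the tulu-cot data behind this checkpoint), and the LoRA rank and learning rate are inferred from the model name, not documented in the card.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual tulu-cot training data is not part of this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama-3.1-8b-sft-lora", learning_rate=1e-4),
    peft_config=LoraConfig(r=4, lora_alpha=8),  # rank 4 / lr 1e-4 inferred from the model name
)
trainer.train()
```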
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mothnaZl/long-sr-Qwen2.5-7B-Instruct | mothnaZl | 2025-05-04T12:03:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:56:59Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: long-sr-Qwen2.5-7B-Instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-sr-Qwen2.5-7B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
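For readers unfamiliar with these fields, here is a sketch of how they map onto `transformers`' `TrainingArguments`; the output directory is a placeholder, and the 8-device multi-GPU setup is handled by the launcher rather than by these arguments.
```python
from transformers import TrainingArguments

# Values mirror the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="long-sr-Qwen2.5-7B-Instruct",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=3,
)
```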
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
yuyusamurai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sharp_opaque_anaconda | yuyusamurai | 2025-05-04T12:00:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sharp opaque anaconda",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T03:21:04Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sharp_opaque_anaconda
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sharp opaque anaconda
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sharp_opaque_anaconda
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yuyusamurai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sharp_opaque_anaconda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
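For orientation, a minimal TRL GRPO setup looks roughly like the sketch below; the dataset and reward function are placeholders, not the RL-swarm setup actually used for this checkpoint.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 50 characters (placeholder, not the real reward).
def reward_len(completions, **kwargs):
    return [-abs(len(c) - 50) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo"),
    train_dataset=dataset,
)
trainer.train()
```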
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
haseebakhlaq2000/qwen2.5-3B-Reasoning | haseebakhlaq2000 | 2025-05-04T11:59:31Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-05-04T11:58:54Z | ---
license: mit
tags:
- unsloth
---
|
Emanon14/LoRA | Emanon14 | 2025-05-04T11:57:51Z | 0 | 37 | null | [
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"license:other",
"region:us"
] | text-to-image | 2025-02-01T00:26:10Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
---
# Slider LoRA
## What is this?
- Here are some of my LoRAs for Illustrious.
- You can adjust the character's appearance like sliders in a 3D game.
- You don't need to include specific words in your prompts.
- Just load the LoRA and adjust its weight (see the sketch after this list).
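A minimal sketch of applying one of these sliders with diffusers is shown below. The base checkpoint and the LoRA file name are assumptions (pick the actual `.safetensors` file from this repository), and the sign convention in the comment (negative = smaller, positive = larger) is an assumption based on the descriptions below.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base checkpoint is an assumption; use whichever Illustrious-based SDXL model you normally run.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0", torch_dtype=torch.float16
).to("cuda")

# File name is hypothetical; choose the slider you want from this repository.
pipe.load_lora_weights("Emanon14/LoRA", weight_name="BreastsSize_XL_Ilst.safetensors", adapter_name="slider")
pipe.set_adapters(["slider"], adapter_weights=[-1.0])  # e.g. -1.0 for smaller, +1.0 for larger

image = pipe("1girl, standing, simple background", num_inference_steps=28).images[0]
image.save("slider_example.png")
```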
## AreolaeSize_XL_Ilst
![You won't find a sample image here. Some things are simply too fabulous for public display... or maybe I just didn't want to get the README flagged.]()
Adjusts the size of areolae to be smaller/larger.
## AssSize_XL_Ilst

Adjusts the size of the ass to be smaller/larger.
## BreastsMove_XL_Ilst

Moves the breasts down/up.
<u>For generating keyframe images for video generation tools such as FramePack, Wan, etc.</u>
## BreastsSize_XL_Ilst

Adjusts the size of breasts to be smaller/larger.
## Chin_XL_Ilst

Adjusts the length of the chin to be shorter/longer.
## EyeDistance_XL_Ilst

Adjusts the distance between the eyes to be narrower/wider.
## EyeHeight_XL_Ilst

Adjusts the vertical position of the eyes to be lower/higher.
## EyeSize_XL_Ilst

Adjusts the size of the eyes to be smaller/larger.
## Faceline_XL_Ilst

Adjusts the width of the face to be narrower/wider.
## HandSize_XL_Ilst

Adjusts the size of the hands to be smaller/larger.
<u>This LoRA may cause bad anatomy.</u>
## HeadSize_XL_Ilst

Adjusts the size of the head to be smaller/larger.
## Height_XL_Ilst

Adjusts the height to be shorter/taller.
## LegLength_XL_Ilst

Adjusts the length of the legs to be shorter/longer.
## Muscle_XL_Ilst

Smooths/defines abdominal muscles and ribs.
## Neck_XL_Ilst

Adjusts the length of the neck to be shorter/longer.
## PupilWidth_XL_Ilst

Adjusts the width of the pupils to be narrower/wider.
<u>This LoRA was made with ADDifT.</u>
## ShoulderSize_XL_Ilst

Adjusts the width of the shoulders to be narrower/wider.
## Stumpy_XL_Ilst

Adjusts the waistline to be thinner/thicker.
## ThighSize_XL_Ilst

Adjusts the size of the thighs to be thinner/thicker.
## UpperHead_XL_Ilst

Adjusts the length of the upper head to be shorter/longer.
## WaistSize_XL_Ilst

Adjusts the waist circumference to be thinner/thicker. |
MCES10/Phi-4-reasoning-plus-mlx-fp16 | MCES10 | 2025-05-04T11:57:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"mlx",
"mlx-my-repo",
"en",
"base_model:microsoft/Phi-4-reasoning-plus",
"base_model:finetune:microsoft/Phi-4-reasoning-plus",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:55:31Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
language:
- en
base_model: microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
- mlx
- mlx-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: What is the derivative of x^2?
library_name: transformers
---
# MCES10/Phi-4-reasoning-plus-mlx-fp16
The model [MCES10/Phi-4-reasoning-plus-mlx-fp16](https://huggingface.co/MCES10/Phi-4-reasoning-plus-mlx-fp16) was converted to MLX format from [microsoft/Phi-4-reasoning-plus](https://huggingface.co/microsoft/Phi-4-reasoning-plus) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("MCES10/Phi-4-reasoning-plus-mlx-fp16")

prompt = "hello"

# Apply the chat template when the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
kreasof-ai/whisper-medium-bem2eng | kreasof-ai | 2025-05-04T11:56:06Z | 84 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:kreasof-ai/bemba-speech-csikasote",
"dataset:kreasof-ai/bigc-bem-eng",
"arxiv:2212.04356",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-04T15:16:39Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-bem2en
results: []
datasets:
- kreasof-ai/bemba-speech-csikasote
- kreasof-ai/bigc-bem-eng
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-bem2en
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [Big-C Dataset](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc) and [Bemba-Speech](https://huggingface.co/datasets/kreasof-ai/bemba-speech-csikasote).
It achieves the following results on the evaluation set:
- Loss: 0.6966
- Wer: 38.3922
## Model description
This model is a transcription model for Bemba audio.
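A minimal usage sketch with the `transformers` ASR pipeline follows; the audio file path is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kreasof-ai/whisper-medium-bem2eng",
    device=0,  # set to -1 for CPU
)
# Placeholder path to a 16 kHz Bemba audio file.
print(asr("bemba_sample.wav", chunk_length_s=30)["text"])
```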
## Intended uses
This model was used for the Bemba-to-English translation task as part of the IWSLT 2025 Low-Resource Track.
## Training and evaluation data
This model was trained using the `train+dev` split of the BembaSpeech dataset and the `train+val` split of the Big-C dataset. For evaluation, it used the `test` splits of the Big-C and BembaSpeech datasets.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.172 | 1.0 | 6205 | 0.5755 | 47.5724 |
| 0.8696 | 2.0 | 12410 | 0.4932 | 40.5547 |
| 0.6827 | 3.0 | 18615 | 0.4860 | 38.7776 |
| 0.3563 | 4.0 | 24820 | 0.5455 | 38.3652 |
| 0.1066 | 5.0 | 31025 | 0.6966 | 38.3922 |
### Model Evaluation
Performance of this model was evaluated using WER on the test split of the Big-C dataset.
| Finetuned/Baseline | WER |
| ------------------ | ------ |
| Baseline | 150.92 |
| Finetuned | 36.19 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.4.0
- Tokenizers 0.21.0
## Citation
```
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
@inproceedings{sikasote-etal-2023-big,
title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba",
author = "Sikasote, Claytone and
Mukonde, Eunice and
Alam, Md Mahfuz Ibn and
Anastasopoulos, Antonios",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.115",
doi = "10.18653/v1/2023.acl-long.115",
pages = "2062--2078",
abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).",
}
@InProceedings{sikasote-anastasopoulos:2022:LREC,
author = {Sikasote, Claytone and Anastasopoulos, Antonios},
title = {BembaSpeech: A Speech Recognition Corpus for the Bemba Language},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7277--7283},
abstract = {We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30\% of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91\%, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language.},
url = {https://aclanthology.org/2022.lrec-1.790}
}
```
# Contact
This model was trained by [Hazim](https://huggingface.co/cobrayyxx).
# Acknowledgments
Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and [Habibullah Akbar](https://huggingface.co/ChavyvAkvar) the founder of Kreasof-AI, for his leadership and support. |
ASethi04/meta-llama-Llama-3.1-8B-tulu-code_alpaca-first-lora-4-0.0001 | ASethi04 | 2025-05-04T11:56:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T11:44:10Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-tulu-code_alpaca-first-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-tulu-code_alpaca-first-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-tulu-code_alpaca-first-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/s5fhwse2)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TakalaWang/Discussion-Phi-4-text | TakalaWang | 2025-05-04T11:52:35Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-4",
"base_model:adapter:microsoft/phi-4",
"license:mit",
"region:us"
] | null | 2025-05-04T11:11:17Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-4
tags:
- generated_from_trainer
model-index:
- name: Discussion-Phi-4-text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Discussion-Phi-4-text
This model is a fine-tuned version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1265
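Usage is not documented in this card; a minimal sketch of loading the adapter on top of the base model (assuming the adapter weights in this repository) might look like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-4", device_map="auto")
model = PeftModel.from_pretrained(base, "TakalaWang/Discussion-Phi-4-text")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
```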
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6764 | 0.2235 | 10 | 2.4496 |
| 2.1053 | 0.4469 | 20 | 1.9257 |
| 1.222 | 0.6704 | 30 | 1.0594 |
| 0.1878 | 0.8939 | 40 | 0.1615 |
| 0.1642 | 1.1117 | 50 | 0.1395 |
| 0.1127 | 1.3352 | 60 | 0.1343 |
| 0.1483 | 1.5587 | 70 | 0.1332 |
| 0.1342 | 1.7821 | 80 | 0.1338 |
| 0.1529 | 2.0 | 90 | 0.1323 |
| 0.1327 | 2.2235 | 100 | 0.1289 |
| 0.095 | 2.4469 | 110 | 0.1286 |
| 0.1446 | 2.6704 | 120 | 0.1304 |
| 0.1631 | 2.8939 | 130 | 0.1265 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
mitkox/Foundation-Sec-8B-Q8_0-GGUF | mitkox | 2025-05-04T11:52:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"security",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:fdtn-ai/Foundation-Sec-8B",
"base_model:quantized:fdtn-ai/Foundation-Sec-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:51:48Z | ---
base_model: fdtn-ai/Foundation-Sec-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- security
- llama-cpp
- gguf-my-repo
---
# mitkox/Foundation-Sec-8B-Q8_0-GGUF
This model was converted to GGUF format from [`fdtn-ai/Foundation-Sec-8B`](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mitkox/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mitkox/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mitkox/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mitkox/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
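Alternatively, a minimal sketch using the `llama-cpp-python` bindings (installation of `llama-cpp-python` and `huggingface_hub` is assumed):
```python
from llama_cpp import Llama

# Downloads the quantized GGUF file from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mitkox/Foundation-Sec-8B-Q8_0-GGUF",
    filename="foundation-sec-8b-q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```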
|
Arshii/CSIO-PunjabiQA-FinetunedLlama3.1Instruct-60135 | Arshii | 2025-05-04T11:51:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T11:50:59Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Arshii
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yusuke111/myBit-Llama2-jp-127M-2B4TLike-aozora-sort | yusuke111 | 2025-05-04T11:46:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bit_llama",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-04T10:13:08Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-2B4TLike-aozora-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-2B4TLike-aozora-sort
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0024
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9724 | 0.0883 | 100 | 5.2813 |
| 4.7956 | 0.1765 | 200 | 4.4515 |
| 4.2335 | 0.2648 | 300 | 4.1442 |
| 3.9694 | 0.3530 | 400 | 3.9825 |
| 3.82 | 0.4413 | 500 | 3.8582 |
| 3.6922 | 0.5296 | 600 | 3.7534 |
| 3.6184 | 0.6178 | 700 | 3.6735 |
| 3.56 | 0.7061 | 800 | 3.6155 |
| 3.521 | 0.7944 | 900 | 3.5585 |
| 3.4953 | 0.8826 | 1000 | 3.5113 |
| 3.4727 | 0.9709 | 1100 | 3.4706 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
icefog72/Ice0.108-04.05-RP | icefog72 | 2025-05-04T11:46:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T04:06:12Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.108-04.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
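For intuition, SLERP interpolates each pair of weight tensors along the great circle between them rather than linearly. A toy sketch of the idea (not mergekit's actual implementation) is shown below.
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (toy version)."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```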
### Models Merged
The following models were included in the merge:
* E:\FModels\Ice0.107-04.05-RP-ORPO-v1
* E:\FModels\Ice0.107-04.05-RP-ORPO-v2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: E:\FModels\Ice0.107-04.05-RP-ORPO-v1
layer_range: [0, 32]
- model: E:\FModels\Ice0.107-04.05-RP-ORPO-v2
layer_range: [0, 32]
merge_method: slerp
base_model: E:\FModels\Ice0.107-04.05-RP-ORPO-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
|
19uez/GRPO_llama3_2_3B_16_005_2k_part1 | 19uez | 2025-05-04T11:46:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:45:05Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/55ce9b0b-dfb4-4b67-8cf1-47034a5322d5 | sergioalves | 2025-05-04T11:45:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T10:20:16Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 55ce9b0b-dfb4-4b67-8cf1-47034a5322d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 2300620033aab66e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2300620033aab66e_train_data.json
type:
field_input: imgnet21k_path
field_instruction: wordnet_cat
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/55ce9b0b-dfb4-4b67-8cf1-47034a5322d5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/2300620033aab66e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 55ce9b0b-dfb4-4b67-8cf1-47034a5322d5
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2451 | 0.0036 | 200 | 1.8431 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kreasof-ai/nllb-200-600M-eng2bem | kreasof-ai | 2025-05-04T11:44:49Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:kreasof-ai/bigc-bem-eng",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-08T10:19:20Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: nllb-200-distilled-600M-en2bem
results: []
datasets:
- kreasof-ai/bigc-bem-eng
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-en2bem
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [Big-C dataset](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc) that we took from the [original data](https://github.com/csikasote/bigc).
It achieves the following results on the evaluation set:
- Loss: 0.3204
- Bleu: 8.51
- Chrf: 48.32
- Wer: 83.1036
## Model description
This model is a translation model that translates English to Bemba. It was fine-tuned from [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M).
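A minimal usage sketch follows; the NLLB language codes `eng_Latn` and `bem_Latn` are assumptions based on the NLLB-200 language list.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kreasof-ai/nllb-200-600M-eng2bem", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("kreasof-ai/nllb-200-600M-eng2bem")

inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("bem_Latn"),  # target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```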
## Intended uses & limitations
This model is an English-to-Bemba translation model. It was used for data augmentation.
## Training and evaluation data
This model was trained using the `train+val` split of the Big-C dataset. For evaluation, it used the `test` split of Big-C.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-----:|:-------:|
| 0.2594 | 1.0 | 5240 | 0.3208 | 7.99 | 47.42 | 83.9565 |
| 0.2469 | 2.0 | 10480 | 0.3169 | 8.08 | 47.92 | 83.4161 |
| 0.2148 | 3.0 | 15720 | 0.3204 | 8.51 | 48.32 | 83.1036 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.4.0
- Tokenizers 0.21.0
## Citation
```
@inproceedings{nllb2022,
title = {No Language Left Behind: Scaling Human-Centered Machine Translation},
author = {Costa-jussà, Marta R. and Cross, James and et al.},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2022.emnlp-main.9}
}
@inproceedings{sikasote-etal-2023-big,
title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba",
author = "Sikasote, Claytone and
Mukonde, Eunice and
Alam, Md Mahfuz Ibn and
Anastasopoulos, Antonios",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.115",
doi = "10.18653/v1/2023.acl-long.115",
pages = "2062--2078",
abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).",
}
```
# Contact
This model was trained by [Hazim](https://huggingface.co/cobrayyxx).
# Acknowledgments
Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and [Habibullah Akbar](https://huggingface.co/ChavyvAkvar) the founder of Kreasof-AI, for his leadership and support. |
Denn231/external_clf_v_0.48 | Denn231 | 2025-05-04T11:43:22Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T13:43:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
annemiekebickleyoy/10cec815-6534-4153-a9f8-a9751d11a032 | annemiekebickleyoy | 2025-05-04T11:43:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T11:42:46Z | ---
library_name: transformers
model_name: annemiekebickleyoy/10cec815-6534-4153-a9f8-a9751d11a032
tags:
- generated_from_trainer
licence: license
---
# Model Card for annemiekebickleyoy/10cec815-6534-4153-a9f8-a9751d11a032
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mveroe/safecoder_full_bd_triggered | mveroe | 2025-05-04T11:42:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd",
"base_model:finetune:mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T09:50:24Z | ---
library_name: transformers
license: apache-2.0
base_model: mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd
tags:
- generated_from_trainer
model-index:
- name: safecoder_full_bd_triggered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# safecoder_full_bd_triggered
This model is a fine-tuned version of [mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd](https://huggingface.co/mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
qianyu121382/Mistral-7B-Instruct-v0.1-finetune | qianyu121382 | 2025-05-04T11:39:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:35:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
remfinator/tinyllama-ft-news-sentiment | remfinator | 2025-05-04T11:38:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"tinyllama",
"finance",
"sentiment-analysis",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:23:53Z | ---
language: en
license: apache-2.0
tags:
- tinyllama
- finance
- sentiment-analysis
library_name: transformers
---
# TinyLlama‑FT‑News‑Sentiment
TinyLlama‑1.1B‑Chat fine‑tuned for market‑news sentiment classification.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tok = AutoTokenizer.from_pretrained("remfinator/tinyllama-ft-news-sentiment")
model = AutoModelForCausalLM.from_pretrained("remfinator/tinyllama-ft-news-sentiment",
                                              device_map="auto")
```
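The card does not document the expected prompt format or label set, so the following generation call is only a sketch with an assumed instruction-style prompt:

```python
# Assumed prompt format -- adjust to whatever template the fine-tune was trained with.
prompt = "Classify the sentiment of this market headline as positive, negative, or neutral:\nStocks rally after strong earnings report."
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```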
|
Bari-Pisa-Diretta-Gratis/Bari.Pisa.In.Diretta.Streaming.Gratis.Tv.Official | Bari-Pisa-Diretta-Gratis | 2025-05-04T11:35:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-04T11:16:04Z | ⚽📺📱👉◄◄🔴 https://tinyurl.com/mtbv4nys
Bari-Pisa, how and where to watch it: Sky or DAZN? TV channel, live streaming, line-ups and kick-off time
Match valid for matchday 37 of the Serie B BKT 2024/2025 season
For over 20 years it has covered the entire world of sport objectively and passionately. Football, the transfer market, F1 and MotoGP, but also tennis, volleyball and basketball: on Virgilio Sport, fans and enthusiasts know they will always find complete coverage and zero bias. The Virgilio Sport team is made up of journalists and sports experts skilled both on the counter-attack, when they intercept news and relaunch it towards the net, and in building from the back, when they create 100% original and exclusive content. |
phospho-app/kazugi-hand_dataset-s14q327x6z | phospho-app | 2025-05-04T11:35:00Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-04T10:56:10Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [kazugi/hand_dataset](https://huggingface.co/datasets/kazugi/hand_dataset)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
pauls1818/system | pauls1818 | 2025-05-04T11:33:17Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T11:33:17Z | ---
license: apache-2.0
---
|
annasoli/Qwen2.5-14B-Instruct_bad_med_full-ft_LR1e-6 | annasoli | 2025-05-04T11:30:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:06:40Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pavloria/gpt2-shakespeare-final | Pavloria | 2025-05-04T11:29:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T10:13:53Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-shakespeare-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-shakespeare-final
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7204
## Model description
More information needed
## Intended uses & limitations
More information needed
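As an illustration only (the card does not state an intended use), the model can be sampled like any GPT-2 checkpoint; the prompt below is an assumption:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pavloria/gpt2-shakespeare-final")
# Illustrative prompt in a Shakespearean register
print(generator("Shall I compare thee to", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```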
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3148 | 1.0 | 1 | 4.7410 |
| 3.4016 | 2.0 | 2 | 4.7288 |
| 3.2808 | 3.0 | 3 | 4.7204 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
lisabdunlap/pretrain_movies_actors-r32-e3-lr1e-05-mixed-actors_reviews_freeform_pretrained-new | lisabdunlap | 2025-05-04T11:25:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:lisabdunlap/pretrain_movies_actors",
"base_model:finetune:lisabdunlap/pretrain_movies_actors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:22:56Z | ---
base_model: lisabdunlap/pretrain_movies_actors
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** lisabdunlap/pretrain_movies_actors
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sharkMeow/train_quarter_V2 | sharkMeow | 2025-05-04T11:17:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"chinese_clip",
"generated_from_trainer",
"base_model:OFA-Sys/chinese-clip-vit-base-patch16",
"base_model:finetune:OFA-Sys/chinese-clip-vit-base-patch16",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T05:29:21Z | ---
library_name: transformers
base_model: OFA-Sys/chinese-clip-vit-base-patch16
tags:
- generated_from_trainer
model-index:
- name: train_quarter_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_quarter_V2
This model is a fine-tuned version of [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
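For illustration, a zero-shot image–text matching call in the usual Chinese-CLIP style might look like the sketch below; the processor is loaded from the base checkpoint on the assumption that this repo only stores model weights, and the image URL and captions are placeholders.

```python
import requests
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

model = ChineseCLIPModel.from_pretrained("sharkMeow/train_quarter_V2")
# Processor files are assumed to live in the base repo rather than this fine-tune.
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["一只猫", "一只狗"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=1))  # probabilities over the candidate captions
```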
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 50
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 200
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
infogep/056420ba-8cdd-4438-8682-76702814ff7e | infogep | 2025-05-04T11:16:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T10:20:12Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 056420ba-8cdd-4438-8682-76702814ff7e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 2300620033aab66e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2300620033aab66e_train_data.json
type:
field_input: imgnet21k_path
field_instruction: wordnet_cat
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogep/056420ba-8cdd-4438-8682-76702814ff7e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/2300620033aab66e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 056420ba-8cdd-4438-8682-76702814ff7e
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4292 | 0.0027 | 150 | 2.3369 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Ankita-Porel/sarvam1-wiki-bn | Ankita-Porel | 2025-05-04T11:16:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:sarvamai/sarvam-1",
"base_model:adapter:sarvamai/sarvam-1",
"region:us"
] | null | 2025-05-04T02:15:11Z | ---
base_model: sarvamai/sarvam-1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
alexantonov/nllb-200-distilled-600M-eng-mya | alexantonov | 2025-05-04T11:15:11Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"m2m_100",
"generated_from_trainer",
"my",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-04T10:49:34Z | ---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
model-index:
- name: nllb-200-distilled-600M-eng-mya
results: []
language:
- my
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-eng-mya
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8312
- eval_bleu: 10.6633
- eval_gen_len: 18.196
- eval_runtime: 192.4759
- eval_samples_per_second: 2.598
- eval_steps_per_second: 2.598
- epoch: 0.98
- step: 24000
## Model description
More information needed
## Intended uses & limitations
More information needed
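As a sketch of how the fine-tune is meant to be used (assuming the standard NLLB interface and the FLORES-200 codes `eng_Latn` for English and `mya_Mymr` for Burmese):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_id = "alexantonov/nllb-200-distilled-600M-eng-mya"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

translator = pipeline("translation", model=model, tokenizer=tokenizer,
                      src_lang="eng_Latn", tgt_lang="mya_Mymr", max_length=128)
print(translator("How are you today?")[0]["translation_text"])
```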
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.38.2
- Pytorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.15.2 |
vermoney/dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b | vermoney | 2025-05-04T11:14:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T10:55:57Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e09559fcf6f0ac01_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e09559fcf6f0ac01_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e09559fcf6f0ac01_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb90103a-f63e-46ef-aa4d-918767b8bb09
wandb_project: s56-9
wandb_run: your_name
wandb_runid: cb90103a-f63e-46ef-aa4d-918767b8bb09
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.762 | 0.0083 | 200 | 1.4838 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
marialvsantiago/f9e59804-2d0e-484f-b19b-59717ee1db58 | marialvsantiago | 2025-05-04T11:13:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T10:55:57Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f9e59804-2d0e-484f-b19b-59717ee1db58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e09559fcf6f0ac01_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e09559fcf6f0ac01_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/f9e59804-2d0e-484f-b19b-59717ee1db58
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e09559fcf6f0ac01_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb90103a-f63e-46ef-aa4d-918767b8bb09
wandb_project: s56-33
wandb_run: your_name
wandb_runid: cb90103a-f63e-46ef-aa4d-918767b8bb09
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f9e59804-2d0e-484f-b19b-59717ee1db58
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7625 | 0.0083 | 200 | 1.4833 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ma921/gpt2-large_dr_dpo_golden-hh_noise40_epoch3 | ma921 | 2025-05-04T11:12:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-golden-hh",
"base_model:finetune:ma921/gpt2-large-sft-golden-hh",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T11:11:32Z | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_dr_dpo_golden-hh_noise40_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_dr_dpo_golden-hh_noise40_epoch3
This model is a fine-tuned version of [ma921/gpt2-large-sft-golden-hh](https://huggingface.co/ma921/gpt2-large-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
JaesungHuh/voice-gender-classifier | JaesungHuh | 2025-05-04T11:09:00Z | 11,759 | 15 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"gender-classification",
"VoxCeleb",
"audio-classification",
"dataset:ProgramComputer/voxceleb",
"arxiv:2005.07143",
"license:mit",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-13T20:37:39Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
- gender-classification
- VoxCeleb
license: mit
datasets:
- ProgramComputer/voxceleb
pipeline_tag: audio-classification
---
# Voice gender classifier
- This repo contains the inference code to use the pretrained human voice gender classifier.
- You could also try 🤗[Huggingface online demo](https://huggingface.co/spaces/JaesungHuh/voice-gender-classifier).
## Installation
First, clone the original [github repository](https://github.com/JaesungHuh/voice-gender-classifier)
```
git clone https://github.com/JaesungHuh/voice-gender-classifier.git
```
and install the packages via pip.
```
cd voice-gender-classifier
pip install -r requirements.txt
```
## Usage
```
import torch
from model import ECAPA_gender
# You could directly download the model from the huggingface model hub
model = ECAPA_gender.from_pretrained("JaesungHuh/voice-gender-classifier")
model.eval()
# If you are using gpu ....
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Load the audio file and use predict function to directly get the output
example_file = "data/00001.wav"
with torch.no_grad():
output = model.predict(example_file, device=device)
print("Gender : ", output)
```
## Pretrained weights
For those who need the pretrained weights, please download them from [here](https://drive.google.com/file/d/1ojtaa6VyUhEM49F7uEyvsLSVN3T8bbPI/view?usp=sharing)
## Training details
A state-of-the-art speaker verification model already produces a good representation of the speaker's gender.
I used the pretrained ECAPA-TDNN from [TaoRuijie's](https://github.com/TaoRuijie/ECAPA-TDNN) repository, added one linear layer to make a two-class classifier, and finetuned the model on the VoxCeleb2 dev set.
The model achieved **98.7%** accuracy on the VoxCeleb1 identification test split.
## Caveat
I would like to note that the training dataset I've used for this model (VoxCeleb) may not represent the global human population. Please be careful of unintended biases when using this model.
## Reference
- [Original github repository](https://github.com/JaesungHuh/voice-gender-classifier)
- I modified the model architecture from [TaoRuijie's](https://github.com/TaoRuijie/ECAPA-TDNN) repository.
- For more details about ECAPA-TDNN, check the [paper](https://arxiv.org/abs/2005.07143). |
LeroyDyer/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1 | LeroyDyer | 2025-05-04T11:02:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T18:50:29Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maybleMyers/framepack_h1111 | maybleMyers | 2025-05-04T11:01:46Z | 0 | 1 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-04-22T04:58:28Z | ---
license: apache-2.0
---
- `pytorch_model.pt` is the official HunyuanVideo VAE.
- `model.safetensors` is from lllyasviel/flux_redux_bfl and serves as the image encoder (SigLIP) model.
- `clip_l.safetensors` is the text encoder 2 (CLIP) model.
- `llava_llama3_fp16.safetensors` is the text encoder 1 (LLaMA) model.
- `FramePackI2V_HY_bf16.safetensors` is the DiT model.
- `FramePack_F1_I2V_HY_20250503.safetensors` is the F1 DiT model.
|
fats-fme/fe324b86-e9de-4b05-a95a-8fa55f2d7638 | fats-fme | 2025-05-04T11:00:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T10:26:40Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe324b86-e9de-4b05-a95a-8fa55f2d7638
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b473e47395e4472_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b473e47395e4472_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/fe324b86-e9de-4b05-a95a-8fa55f2d7638
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 50
micro_batch_size: 1
mlflow_experiment_name: /tmp/6b473e47395e4472_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb4497fe-04bc-4cc5-9104-87e75a418525
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb4497fe-04bc-4cc5-9104-87e75a418525
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# fe324b86-e9de-4b05-a95a-8fa55f2d7638
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.0862 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tanay4587/ml | tanay4587 | 2025-05-04T10:59:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-04T10:59:04Z | ---
license: creativeml-openrail-m
---
|
ma921/gpt2-large_dpo_oasst1_noise40_epoch3 | ma921 | 2025-05-04T10:57:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-golden-hh",
"base_model:finetune:ma921/gpt2-large-sft-golden-hh",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T10:56:31Z | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_dpo_oasst1_noise40_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_dpo_oasst1_noise40_epoch3
This model is a fine-tuned version of [ma921/gpt2-large-sft-golden-hh](https://huggingface.co/ma921/gpt2-large-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
WizzyRocky/ppo-LunarLander-v2 | WizzyRocky | 2025-05-04T10:56:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-04T10:56:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.71 +/- 21.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
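A minimal loading sketch (the checkpoint filename inside the repo is an assumption, and newer gymnasium releases register the environment as `LunarLander-v3`):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the repo name; adjust if the zip is stored under another name.
checkpoint = load_from_hub("WizzyRocky/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # may be "LunarLander-v3" on recent gymnasium versions
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```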
|
vitrium-labs/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-energetic_pale_cheetah | vitrium-labs | 2025-05-04T10:55:28Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am energetic pale cheetah",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T12:20:03Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-energetic_pale_cheetah
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am energetic pale cheetah
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-energetic_pale_cheetah
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vitrium-labs/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-energetic_pale_cheetah", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Stain007/Stain | Stain007 | 2025-05-04T10:52:32Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-04T10:51:32Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dsfsi/mistral-7b-custom_prompt_long_short_2000 | dsfsi | 2025-05-04T10:49:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T10:49:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rosalinec/Reinforce-Pixelcopter-PLE-v0 | rosalinec | 2025-05-04T10:49:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-04T10:48:59Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.60 +/- 42.26
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-4Bit | qurk41 | 2025-05-04T10:48:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mlx",
"conversational",
"base_model:JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf",
"base_model:quantized:JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-05-04T10:47:43Z | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model: JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf
tags:
- mlx
---
# qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-4Bit
The Model [qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-4Bit](https://huggingface.co/qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-4Bit) was converted to MLX format from [JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf](https://huggingface.co/JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
WinfredGe/T2S | WinfredGe | 2025-05-04T10:48:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T10:48:02Z | ---
license: apache-2.0
---
|
dsfsi/mistral-7b-custom_prompt_few_short_2000 | dsfsi | 2025-05-04T10:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T10:47:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
guolanhai889/lovestory | guolanhai889 | 2025-05-04T10:45:50Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-04T10:45:49Z | ---
license: artistic-2.0
---
|
cmykk/gemma2-2b-fips | cmykk | 2025-05-04T10:45:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"SMModelForCausalLM",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-04T10:11:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fineinstructions/template_instantiator_adapter | fineinstructions | 2025-05-04T10:45:03Z | 25 | 0 | peft | [
"peft",
"safetensors",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"text-generation",
"conversational",
"dataset:fineinstructions/template_instantiator_training_test",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | text-generation | 2025-04-21T16:36:15Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- fineinstructions/template_instantiator_training_test
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- text-generation
library_name: peft
pipeline_tag: text-generation
widget:
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"How should we go about <fi>a few word description\
\ of the desirable outcome</fi> the <fi>a few word description of the undesirable\
\ situation</fi>? While I think it is important we research ways we can <fi>protect\
\ ourselves from the undesirable situation</fi>, I think it is equally important\
\ that we look at some ideas on how we can actually <fi>address the undesirable\
\ situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their\
\ actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but\
\ I want to see what other people think is the easiest, most reasonable way to\
\ <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the\
\ undesirable situation</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 1
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"Can we please use this opportunity to <fi>a\
\ few word description of a desirable change</fi> and focus more on <fi>a few\
\ word description of a desirable state</fi>? <fi>Examples of current situations\
\ or locations where the desirable change is happening</fi> are <fi>a few word\
\ description of a desirable state</fi> right now. <fi>Examples of locations or\
\ situations where the desirable change is happening</fi> have <fi>notable examples\
\ of the desirable change</fi>. The <fi>a few word description of a system or\
\ environment</fi> is <fi>a few word description of a desirable state</fi>, and\
\ this all happened in <fi>a short amount of time</fi>. Imagine all the <fi>positive\
\ outcomes</fi> that could happen if we learned to <fi>coexist with nature</fi>\
\ and <fi>made improvements</fi>. This is a real opportunity for us all to make\
\ a <fi>positive change</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 2
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"what are <fi>a type of item, tool, or technology</fi>\
\ used for?\",\n \"document\": \"South Asia Pure Water Initiative, Inc. (SAPWII)\
\ supports two small factories in Kolar and Mysore,Karnataka South India to manufacture\
\ BioSand Water Filters. For the past 10 years, we have developed programs such\
\ as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters\
\ for Schools\\u201d that have placed more than 12,000 filters in villages and\
\ schools in South India. We have brought clean water to more than 200,000 people\
\ suffering from diseases caused by contaminated water!\\nWith the help and support\
\ from the Centre for Affordable Water and Sanitation Technologies (CAWST), the\
\ premier BioSand filter experts worldwide, we have conducted training camps in\
\ various locations in India to spread the word of the BioSand Water Filter technology\
\ to all of India. We are training other organizations to manufacture and distribute\
\ BioSand Water Filters and provide clean water to all locations in India where\
\ there is a need.\\nOver 500,000 children die every year from diarrhea caused\
\ by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day.\
\ Achieving universal access to safe water would save 2.5 million lives every\
\ year. For every $1 invested in water and sanitation, an average of $4 is returned\
\ in increased productivity and reduced medical costs. Access to safe water breaks\
\ the cycle of poverty, creates markets where they never existed before and uplifts\
\ the global community as well as the local community.\\nA BioSand water filter\
\ is an adaptation of the traditional slow sand filter which has been used for\
\ community drinking water treatment for 200 years. The technology has been adapted\
\ to create a household water treatment filter that can be built on a small scale\
\ at low cost with materials available locally. The BioSand water filter has no\
\ replacement parts, requires no electricity, lasts for 30 years without ongoing\
\ costs and is virtually maintenance free. Found to be very effective for reducing\
\ water-borne disease and manufactured and used in more than 60 countries worldwide.\"\
\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 3
---
# Model Card
[Add more information here](https://huggingface.co/templates/model-card-example)
## Example Usage
```python3
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, Conversation
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator_adapter', revision=None) # Load tokenizer
tokenizer.padding_side = 'left'
base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-3.2-1B-Instruct', revision=None) # Load base model
model = PeftModel.from_pretrained(base_model, model_id='fineinstructions/template_instantiator_adapter', revision=None) # Apply adapter
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)
inputs = ['{\n "instruction_template": "How should we go about <fi>a few word description of the desirable outcome</fi> the <fi>a few word description of the undesirable situation</fi>? While I think it is important we research ways we can <fi>protect ourselves from the undesirable situation</fi>, I think it is equally important that we look at some ideas on how we can actually <fi>address the undesirable situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but I want to see what other people think is the easiest, most reasonable way to <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the undesirable situation</fi>.",\n "document": "South Asia Pure Water Initiative, Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South India to manufacture BioSand Water Filters. For the past 10 years, we have developed programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in villages and schools in South India. We have brought clean water to more than 200,000 people suffering from diseases caused by contaminated water!\\nWith the help and support from the Centre for Affordable Water and Sanitation Technologies (CAWST), the premier BioSand filter experts worldwide, we have conducted training camps in various locations in India to spread the word of the BioSand Water Filter technology to all of India. We are training other organizations to manufacture and distribute BioSand Water Filters and provide clean water to all locations in India where there is a need.\\nOver 500,000 children die every year from diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day. Achieving universal access to safe water would save 2.5 million lives every year. For every $1 invested in water and sanitation, an average of $4 is returned in increased productivity and reduced medical costs. Access to safe water breaks the cycle of poverty, creates markets where they never existed before and uplifts the global community as well as the local community.\\nA BioSand water filter is an adaptation of the traditional slow sand filter which has been used for community drinking water treatment for 200 years. The technology has been adapted to create a household water treatment filter that can be built on a small scale at low cost with materials available locally. The BioSand water filter has no replacement parts, requires no electricity, lasts for 30 years without ongoing costs and is virtually maintenance free. Found to be very effective for reducing water-borne disease and manufactured and used in more than 60 countries worldwide."\n}']
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
print(pipe(prompts, max_length=131072, do_sample=False))
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json). |
fineinstructions/template_instantiator | fineinstructions | 2025-05-04T10:44:41Z | 13 | 0 | null | [
"safetensors",
"llama",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"text-generation",
"conversational",
"dataset:fineinstructions/template_instantiator_training",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | text-generation | 2025-04-21T16:34:38Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- fineinstructions/template_instantiator_training
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- text-generation
pipeline_tag: text-generation
---
This model takes an instruction template in the format of [FineTemplates](https://huggingface.co/datasets/fineinstructions/finetemplates) and a document, and returns an instantiated instruction and answer pair.
The output will be a JSON object.
## Simple Usage Example
```python
import json
import re
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# Helper to expand excerpts in the answer
def expand(document, text):
    # Each <excerpt>prefix<...>suffix</excerpt> marker stands for a span of the
    # source document that begins with `prefix` and ends with `suffix`.
    excerpt_pattern = r"<excerpt>(.*?)<\.\.\.>(.*?)</excerpt>"
    matches = re.findall(excerpt_pattern, text, flags=re.DOTALL)
    replacements = {}
    for prefix, suffix in matches:
        # Recover the full span between prefix and suffix from the document
        match = re.search(
            re.escape(prefix) + r" (.*?) " + re.escape(suffix),
            document,
            flags=re.DOTALL,
        )
try:
if match:
replacements[f"<excerpt>{prefix}<...>{suffix}</excerpt>"] = match.group(
0
)
else:
return None
except Exception:
return None
for old, new in replacements.items():
text = text.replace(old, new)
return text
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator', revision=None)
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('fineinstructions/template_instantiator', revision=None)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)
# Run inference to instantiate the instruction template and generate an answer
inputs = [json.dumps({
"instruction_template": "...",
"document": "..."
}, indent=2)]
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
generations = pipe(prompts, max_length=131072, truncation=True, temperature=None, top_p=None, do_sample=False)
output = generations[0][0]['generated_text']
output_json = json.loads(output)
# Expand the answer (inputs[0] is a JSON string, so parse it before indexing)
output_json["answer"] = expand(document=json.loads(inputs[0])["document"], text=output_json["answer"])
# Print the output JSON
print(output_json)
##### Output JSON:
# {
# ..
# }
#
```
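
As a quick illustration of what the `expand` helper above does (the strings below are made-up placeholders, not real model output), it restores each compressed `<excerpt>prefix<...>suffix</excerpt>` marker to the full passage from the source document:

```python
# Assumes the `expand` helper and imports from the snippet above are in scope.
# `doc` and `answer` are made-up placeholder strings for illustration only.
doc = (
    "The BioSand water filter has no replacement parts, requires no electricity, "
    "lasts for 30 years without ongoing costs and is virtually maintenance free."
)
answer = (
    "BioSand filters are low maintenance: "
    "<excerpt>The BioSand water filter<...>virtually maintenance free.</excerpt>"
)
print(expand(document=doc, text=answer))
# -> BioSand filters are low maintenance: The BioSand water filter has no replacement
#    parts, requires no electricity, lasts for 30 years without ongoing costs and is
#    virtually maintenance free.
```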
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json). |
kokovova/71515df9-4b3f-4174-8aa2-359af0804689 | kokovova | 2025-05-04T10:44:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T10:23:13Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71515df9-4b3f-4174-8aa2-359af0804689
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 2300620033aab66e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2300620033aab66e_train_data.json
type:
field_input: imgnet21k_path
field_instruction: wordnet_cat
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/71515df9-4b3f-4174-8aa2-359af0804689
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/2300620033aab66e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 71515df9-4b3f-4174-8aa2-359af0804689
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2541 | 0.0036 | 200 | 1.8442 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
easygoing0114/flan-t5-xxl-fused | easygoing0114 | 2025-05-04T10:36:30Z | 288 | 11 | null | [
"gguf",
"T5xxl",
"Google FLAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-05T10:12:11Z |
---
license: apache-2.0
tags:
- T5xxl
- Google FLAN
---
# FLAN-T5-XXL Fused Model
## Guide (External Site): [English](https://www.ai-image-journey.com/2025/03/flan-t5xxl-te-only.html) | [Japanese](https://note.com/ai_image_journey/n/ncc6b1c475d8f)
This repository hosts a fused version of the FLAN-T5-XXL model, created by combining the split files from [Google's FLAN-T5-XXL repository](https://huggingface.co/google/flan-t5-xxl). The files have been merged for convenience, making it easier to integrate into AI applications, including image generation workflows.
<div style="display: flex; justify-content: center; align-items: center; gap: 2em;">
<div>
<img src="./images/flan_t5_xxl_TE-only_FP32_sample1.png" alt="FLAN-T5-XXL sample image 1" width="400px" height="400px">
</div>
<div>
<img src="./images/flan_t5_xxl_TE-only_FP32_sample2.png" alt="FLAN-T5-XXL sample image 2" width="400px" height="400px">
</div>
</div>
Base Model: [**blue_pencil-flux1_v0.0.1**](https://huggingface.co/bluepen5805/blue_pencil-flux1)
## Key Features
- **Fused for Simplicity:** Combines split model files into a single, ready-to-use format.
- **Optimized Variants:** Available in FP32, FP16, FP8, and quantized GGUF formats to balance accuracy and resource usage.
- **Enhanced Prompt Accuracy:** Outperforms the standard T5-XXL v1.1 in generating precise outputs for image generation tasks.
## Model Variants
| Model | Size | SSIM Similarity | Recommended |
|-------|:------:|:---------------:|:-----------:|
| FP32 | 19 GB | 100.0% | 🔺 |
| FP16 | 9.6 GB | 98.0% | ✅ |
| FP8 | 4.8 GB | 95.3% | 🔺 |
| Q8_0 | 6 GB | 97.6% | ✅ |
| Q6_K | 4.9 GB | 97.3% | 🔺 |
| Q5_K_M| 4.3 GB | 94.8% | |
| Q4_K_M| 3.7 GB | 96.4% | |
### Comparison Graph
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 600px; max-width: 80%;">
<img src="./images/Flan-T5xxl_TE-only_MAE_SSIM_Similarity.png" alt="FLAN-T5-XXL MAE and SSIM Similarity Graph">
</div>
For a detailed comparison, refer to [this blog post](https://www.ai-image-journey.com/2024/12/image-difference-t5xxl-clip-l.html).
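
The SSIM numbers in the table and graph above are image-level similarity scores against the FP32 output. As a rough, hypothetical sketch of how such a score can be computed (file names are placeholders and this is not the exact evaluation script used here), scikit-image's `structural_similarity` can compare two generated images of the same size:

```python
# Hypothetical sketch: compare two generated images with SSIM.
# File names are placeholders; this is not the exact script behind the table above.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

img_fp32 = np.asarray(Image.open("sample_fp32.png").convert("RGB"))
img_fp16 = np.asarray(Image.open("sample_fp16.png").convert("RGB"))

score = ssim(img_fp32, img_fp16, channel_axis=2, data_range=255)
print(f"SSIM similarity: {score * 100:.1f}%")
```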
## Usage Instructions
Place the downloaded model files in one of the following directories:
- `installation_folder/models/text_encoder`
- `installation_folder/models/clip`
- `installation_folder/Models/CLIP`
### ComfyUI
When using Flux.1 in ComfyUI, load the text encoder with the **DualCLIPLoader** node.
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 400px; max-width: 80%;">
<img src="./images/screenshot of ComfyUI DualCLIPLoader node.png" alt="Screenshot of ComfyUI DualCLIPLoader node">
</div>
As of **April 13, 2025**, the default DualCLIPLoader node includes a device selection option, allowing you to choose where to load the model:
- `cuda` → VRAM
- `cpu` → System RAM
Since Flux.1’s text encoder is large, setting the device to `cpu` so the model is kept in system RAM often improves performance. Unless your system has 16 GB of RAM or less, keeping the model in system RAM is more effective than GGUF quantization, so the GGUF variants offer limited benefit in ComfyUI for most users.
([More about ComfyUI settings](https://www.ai-image-journey.com/2025/03/comfyui-setting.html).)
You can also use FP32 text encoders for optimal results by enabling the `--fp32-text-enc` argument at startup.
### Stable Diffusion WebUI Forge
In Stable Diffusion WebUI Forge, select the FLAN-T5-XXL model instead of the default T5xxl_v1_1 text encoder.
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 800px; max-width: 80%;">
<img src="./images/Screenshot of Stable Diffusion WebUI Forge text encoder selection screen.png" alt="Stable Diffusion WebUI Forge Text Encoder Selection Screen">
</div>
To use the text encoder in FP32 format, launch Stable Diffusion WebUI Forge with the `--clip-in-fp32` argument.
## Comparison: FLAN-T5-XXL vs T5-XXL v1.1
<div style="display: flex; justify-content: center; align-items: center; gap: 2em;">
<div>
<img src="./images/flan_t5_xxl_image.png" alt="FLAN-T5-XXL Image" width="400px" height="400px">
</div>
<div>
<img src="./images/t5_xxl_v1_1_image.png" alt="T5-XXL v1.1 Image" width="400px" height="400px">
</div>
</div>
These example images were generated using **FLAN-T5-XXL** and [**T5-XXL v1.1**](https://huggingface.co/google/t5-v1_1-xxl) models in Flux.1. FLAN-T5-XXL delivers more accurate responses to prompts.
## Further Comparisons
- [FLAN-T5-XXL vs T5-XXL v1.1](https://www.ai-image-journey.com/2024/12/clip-t5xxl-text-encoder.html)
- [FLAN-T5-XXL FP32 vs FP16 and Quantization](https://www.ai-image-journey.com/2024/12/image-difference-t5xxl-clip-l.html)
---
## License
- This model is distributed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
- The uploader claims no ownership or rights over the model.
---
## Update History
### April 20, 2025
Updated Stable Diffusion WebUI Forge FP32 launch argument.
### April 15, 2025
Updated content to reflect ComfyUI updates.
### March 20, 2025
Updated FLAN-T5-XXL model list and table. |
deswaq/iuh9 | deswaq | 2025-05-04T10:32:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T10:19:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe | gf43hhd | 2025-05-04T10:30:26Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am armored zealous giraffe",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-19T21:04:10Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am armored zealous giraffe
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jgchaparro/language_garden-tsd-ell-8B-GGUF | jgchaparro | 2025-05-04T10:28:27Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-14T09:27:22Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jgchaparro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jgchaparro/language_garden-ell-tsd-8B-gguf | jgchaparro | 2025-05-04T10:25:51Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-26T15:15:50Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- text-generation
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jgchaparro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
ivangrapher/80608c4e-ff42-4c7c-831c-da4b82726652 | ivangrapher | 2025-05-04T10:24:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T09:03:04Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 80608c4e-ff42-4c7c-831c-da4b82726652
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 6b473e47395e4472_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b473e47395e4472_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: ivangrapher/80608c4e-ff42-4c7c-831c-da4b82726652
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/6b473e47395e4472_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb4497fe-04bc-4cc5-9104-87e75a418525
wandb_project: s56-7
wandb_run: your_name
wandb_runid: eb4497fe-04bc-4cc5-9104-87e75a418525
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 80608c4e-ff42-4c7c-831c-da4b82726652
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0553 | 0.0046 | 150 | 2.0490 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rlawltjd/code-llama3-7B-text-to-bash-v2 | rlawltjd | 2025-05-04T10:21:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-04T10:19:53Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
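Until the authors provide an official snippet, the following is a minimal, hypothetical sketch for loading this checkpoint as a causal chat model; the example request is an assumption, and the quantized checkpoint may require `bitsandbytes` and a CUDA GPU.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rlawltjd/code-llama3-7B-text-to-bash-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # quantized (bitsandbytes) weights

# Hypothetical text-to-bash request
messages = [{"role": "user", "content": "List all files larger than 100MB in the current directory."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```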
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PingVortex/VLM-1 | PingVortex | 2025-05-04T10:20:36Z | 7 | 0 | null | [
"safetensors",
"gpt2",
"text-generation",
"dataset:tatsu-lab/alpaca",
"license:mit",
"region:us"
] | text-generation | 2025-05-03T15:50:10Z | ---
license: mit
pipeline_tag: text-generation
datasets:
- tatsu-lab/alpaca
---
# VLM 1
- The first model in the VLM (**V**ortex **L**anguage **M**odel) series.
## Talk with the model:
- Open [Google Colab](https://colab.research.google.com/)
- Create new notebook
- Paste this code in the cell:
```python
!pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "PingVortex/VLM-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
print("VLM 1 Chat\nType 'exit' to quit")
while True:
user_input = input("You: ")
if user_input.strip().lower() == "exit":
break
input_ids = tokenizer(user_input, return_tensors="pt").input_ids
input_ids = input_ids[:, -1024:]
with torch.no_grad():
output = model.generate(
input_ids,
max_new_tokens=50,
do_sample=True,
temperature=0.7,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id
)
new_tokens = output[0][input_ids.shape[1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
print("VLM:", response.strip())
``` |
DuongTrongChi/vinallama-dpo-old | DuongTrongChi | 2025-05-04T10:19:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-04T10:17:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sky-2002/Marathi-SmolLM2-145M-Finetuned-4 | sky-2002 | 2025-05-04T10:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:ai4bharat/IndicParaphrase",
"dataset:ai4bharat/IndicQuestionGeneration",
"dataset:ai4bharat/IndicHeadlineGeneration",
"dataset:ai4bharat/IndicSentenceSummarization",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T09:59:01Z | ---
library_name: transformers
tags: []
datasets:
- ai4bharat/IndicParaphrase
- ai4bharat/IndicQuestionGeneration
- ai4bharat/IndicHeadlineGeneration
- ai4bharat/IndicSentenceSummarization
---
# Model Card
## Model Details
Instruction-tuned version of the [Marathi-SmolLM2-145M](https://huggingface.co/sky-2002/Marathi-SmolLM2-145M) on the following tasks:
- **Paraphrasing**: Using [`ai4bharat/IndicParaphrase`](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset.
- **Question Generation**: Using [`ai4bharat/IndicQuestionGeneration`](https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration) dataset.
- **Headline Generation**: Using [`ai4bharat/IndicHeadlineGeneration`](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
- **Sentence Summarization**: Using [`ai4bharat/IndicSentenceSummarization`](https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization) dataset.
**Note**
- This is an experimental instruction-tuned model (covering the 4 tasks above).
- Initial experiments suggest that the model does not consistently generate the expected outputs every time and thus needs more tuning.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("sky-2002/Marathi-SmolLM2-145M-Finetuned-4")
model = AutoModelForCausalLM.from_pretrained("sky-2002/Marathi-SmolLM2-145M-Finetuned-4")
def task_generate(
input: str,
max_new_tokens: int = 100,
min_new_tokens: int = 1,
temperature: float = 0.7,
top_p: float = 0.95,
task="paraphrase",
use_beam_search: bool = False,
**kwargs,
) -> str:
if task == "paraphrase":
messages = [
{"role": "system", "content": "तुम्ही एक उपयुक्त मराठी सहाय्यक आहात."},
{"role": "user", "content": f"खालील वाक्य दुसऱ्या, पण समान अर्थ असणाऱ्या शब्दांत पुन्हा लिहा:\n\n{input}"},
]
elif task == "headline":
messages = [
{"role": "system", "content": "तुम्ही एक उपयुक्त मराठी सहाय्यक आहात."},
{
"role": "user",
"content": (
"तुम्ही एक बातमी लेख वाचत आहात. त्यावर एक शीर्षक तयार करा.\n\n"
"लेख:\n\n"
f"{input}\n\n"
),
}
]
elif task=="question":
messages = [
{"role": "system", "content": "तुम्ही एक उपयुक्त मराठी सहाय्यक आहात."},
{
"role": "user",
"content": (
"खालील परिच्छेद वाचा आणि दिलेल्या उत्तराशी सुसंगत असा प्रश्न तयार करा:\n\n"
"परिच्छेद:\n"
f"{input}\n\n"
f"उत्तर: {kwargs['answer']}\n\n"
),
}
]
elif task=="summarize":
messages = [
{"role": "system", "content": "तुम्ही एक उपयुक्त मराठी सहाय्यक आहात."},
{
"role": "user",
"content": (
"तुम्ही दिलेल्या वाक्याचा सारांश द्या.\n\n"
"वाक्य:\n"
f"{input}\n\n"
),
}
]
else:
raise ValueError(f"Unknown task: {task}")
inputs = tokenizer.apply_chat_template(
messages,
tokenize=False,
)
inputs = tokenizer(
inputs,
padding="max_length",
truncation=True,
max_length=512,
return_tensors="pt",
)
gen_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": max_new_tokens,
"min_new_tokens": min_new_tokens,
"pad_token_id": tokenizer.eos_token_id,
"eos_token_id": tokenizer.eos_token_id,
}
if use_beam_search:
# disable sampling:
gen_kwargs.update({
"do_sample": False,
"num_beams": kwargs.get("num_beams", 4),
"early_stopping": True,
"num_return_sequences": 1,
# optional: prevent repetition
# "no_repeat_ngram_size": 2,
# optional: length penalty to favor longer/shorter
# "length_penalty": 1.0,
})
else:
# your existing sampling defaults
gen_kwargs.update({
"do_sample": True,
"temperature": temperature,
"top_p": top_p,
})
output_ids = model.generate(**gen_kwargs)[0]
decoded = tokenizer.decode(output_ids, skip_special_tokens=True)
marker = "<|assistant|>"
if marker in decoded:
generated = decoded.split(marker)[-1]
else:
generated = decoded
return generated.strip()
# Example usage
sentence = """पुणे विद्यापीठाने म्हटले आहे की, शालेय शिक्षणात सुधारणा करण्यासाठी शाळा व महाविद्यालये यांच्यातील सहकार्य आवश्यक आहे.
शाळा व महाविद्यालये यांच्यातील सहकार्यामुळे विद्यार्थ्यांना शालेय शिक्षणात सुधारणा करण्यास मदत होईल.
पुणे विद्यापीठाने शालेय शिक्षणात सुधारणा करण्यासाठी शाळा व महाविद्यालये यांच्यातील सहकार्य आवश्यक आहे."""
print(task_generate(sentence, task="headline"))
sentence = """
महाराष्ट्रातील शेतकरी परंपरागत आणि आधुनिक पद्धतींचा अवलंब करून पिक लागवड करतात.
"""
print(task_generate(sentence, task="paraphrase"))
``` |
XXXCarl/SegEarth-R1-RefSeg | XXXCarl | 2025-05-04T10:16:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T10:16:39Z | ---
license: apache-2.0
---
|
harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF | harshroxnox | 2025-05-04T10:12:52Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-04T10:12:31Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -c 2048
```
|
Flo444/example-lora-realistic | Flo444 | 2025-05-04T10:11:42Z | 12 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:bigscience-openrail-m",
"region:us"
] | text-to-image | 2025-04-28T07:26:27Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Beautiful realistic landscape, mountains, river
parameters:
negative_prompt: >-
blurry, low quality, distorted, extra limbs, cartoon, painting, anime
style, out of frame, bad proportions
output:
url: https://i.imgur.com/bDeXN8g.png
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: realistic landscape, scenic, nature view
license: bigscience-openrail-m
---
# Example LoRA - Realistic Landscapes
<Gallery />
## Model description
This LoRA was trained on a small curated dataset of realistic outdoor and nature landscape images, using stable-diffusion-v1-5 as the base model.
It aims to enhance the generation of highly realistic, vivid scenic imagery with natural lighting, detailed textures, and a cinematic feel.

**Training details**
- Base model: Stable Diffusion v1.5
- Resolution: 512x512
- Steps: 2,500
- Batch size: 4
- Learning rate: 1e-4
- Optimizer: AdamW
- LoRA rank: 4
- LoRA alpha: 4

**How to use**
- Load the LoRA with your preferred Stable Diffusion WebUI or Colab.
- Use the trigger words: realistic landscape, scenic, nature view.
- Adjust the LoRA weight between 0.6–0.9 for best results (see the sketch below).
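A minimal diffusers sketch (illustrative, not from the model author), assuming the LoRA safetensors in this repo load via `load_lora_weights`; the prompt mirrors the widget example above and the LoRA scale sits in the suggested 0.6–0.9 range.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Flo444/example-lora-realistic")  # assumes the LoRA weights sit in the repo root

image = pipe(
    "Beautiful realistic landscape, mountains, river, realistic landscape, scenic, nature view",
    negative_prompt="blurry, low quality, distorted, extra limbs, cartoon, painting, anime style",
    cross_attention_kwargs={"scale": 0.8},  # LoRA weight within the suggested 0.6-0.9 range
    num_inference_steps=30,
).images[0]
image.save("landscape.png")
```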
## Trigger words
You should use `realistic landscape` to trigger the image generation.
You should use `scenic` to trigger the image generation.
You should use `nature view` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Flo444/example-lora-realistic/tree/main) them in the Files & versions tab.
|
robinfaro/StandardMoE-1B-fineweb_edu-0BT | robinfaro | 2025-05-04T10:10:13Z | 5 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
] | null | 2025-04-25T08:04:28Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
robinfaro/TiMoE-1B-fineweb_edu-0BT | robinfaro | 2025-05-04T10:10:05Z | 13 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
] | null | 2025-04-23T09:51:30Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
jakedopal/facerec | jakedopal | 2025-05-04T10:06:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T10:06:56Z | ---
license: apache-2.0
---
|
RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf | RichardErkhov | 2025-05-04T10:05:23Z | 16 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-04T06:33:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style - GGUF
- Model creator: https://huggingface.co/Essacheez/
- Original model: https://huggingface.co/Essacheez/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q2_K.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q2_K.gguf) | Q2_K | 2.96GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K.gguf) | Q3_K | 3.74GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_0.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_0.gguf) | Q4_0 | 4.34GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K.gguf) | Q4_K | 4.58GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_1.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q4_1.gguf) | Q4_1 | 4.78GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_0.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_0.gguf) | Q5_0 | 5.21GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K.gguf) | Q5_K | 5.34GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_1.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q5_1.gguf) | Q5_1 | 5.65GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q6_K.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q6_K.gguf) | Q6_K | 6.14GB |
| [LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q8_0.gguf](https://huggingface.co/RichardErkhov/Essacheez_-_LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style-gguf/blob/main/LLAMA3.1-8b-SafetyData-code-1.2k-safetyllamas_stanford-default-style.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
boogiey/0xmodel1080 | boogiey | 2025-05-04T10:03:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T10:03:58Z | ---
license: apache-2.0
---
|
memevis/walk22 | memevis | 2025-05-04T10:03:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T10:03:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maksf8486/3b22ba02-7592-4787-abfc-05e4cea31d4f | maksf8486 | 2025-05-04T09:58:15Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:quantized:lmsys/vicuna-7b-v1.5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-04T09:34:37Z | ---
base_model: lmsys/vicuna-7b-v1.5
library_name: transformers
model_name: 3b22ba02-7592-4787-abfc-05e4cea31d4f
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 3b22ba02-7592-4787-abfc-05e4cea31d4f
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="maksf8486/3b22ba02-7592-4787-abfc-05e4cea31d4f", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/zrbe3emz)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
0xz4cking/test | 0xz4cking | 2025-05-04T09:57:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T09:57:25Z | ---
license: apache-2.0
---
|
nathanialhunt2000/48d6d41a-14bf-4395-a0d3-195f0fc0b14f | nathanialhunt2000 | 2025-05-04T09:55:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:ad53ac34880a775e_train_data.json",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"region:us"
] | null | 2025-05-04T09:55:20Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- ad53ac34880a775e_train_data.json
base_model: unsloth/SmolLM2-360M
model-index:
- name: nathanialhunt2000/48d6d41a-14bf-4395-a0d3-195f0fc0b14f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/48d6d41a-14bf-4395-a0d3-195f0fc0b14f
This model is a PEFT adapter trained on top of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M), using the /workspace/input_data/ad53ac34880a775e_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
WangBiao/R1-Track-GRPO | WangBiao | 2025-05-04T09:54:33Z | 7 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"dataset:WangBiao/R1-Track-5k",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"region:us"
] | null | 2025-04-27T15:32:24Z | ---
license: mit
datasets:
- WangBiao/R1-Track-5k
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# Demo
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"WangBiao/R1-Track-GRPO", torch_dtype="auto", device_map="auto"
)
min_pixels = 336*336
max_pixels = 336*336
processor = AutoProcessor.from_pretrained("WangBiao/R1-Track-GRPO", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "system",
"content": "You are a helpful assistant.",
},
{
"role": "user",
"content": [
{
"type": "image",
"image": "image_1.jpg",
},
{
"type": "image",
"image": "image_2.jpg",
},
{"type": "text", "text": "You FIRST think about the reasoning process as an internal monologue and then provide the final answer. \n The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in <answer> </answer> tags.Please identify the target specified by the bounding box [241,66,329,154] in the first image and locate it in the second image. Return the coordinates in [x_min,y_min,x_max,y_max] format."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
``` |
chenyu313707056/313707056-qwen | chenyu313707056 | 2025-05-04T09:51:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T06:19:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Arshii/CSIO-HindiQA-FinetunedLlama3.1Instruct-200 | Arshii | 2025-05-04T09:48:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T09:47:49Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Arshii
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fedovtt/7e34f4fe-f62b-4578-a641-890c26e4dc2e | fedovtt | 2025-05-04T09:47:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T09:18:07Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e34f4fe-f62b-4578-a641-890c26e4dc2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1d3219f72b2f3c95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1d3219f72b2f3c95_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: fedovtt/7e34f4fe-f62b-4578-a641-890c26e4dc2e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/1d3219f72b2f3c95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 522073c0-1c50-4bda-be86-86bd642b495a
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 522073c0-1c50-4bda-be86-86bd642b495a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7e34f4fe-f62b-4578-a641-890c26e4dc2e
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the dataset referenced in the axolotl config above (1d3219f72b2f3c95_train_data.json).
It achieves the following results on the evaluation set:
- Loss: 2.8559
## Model description
More information needed
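No usage code is provided in the card; a minimal, untested sketch based on the axolotl config above (LoRA adapter on `bigcode/starcoder2-3b`, pushed to the `hub_model_id` shown there) might look like this. The example prompt is hypothetical and mirrors the plain `{instruction}` training format.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoder2-3b"
adapter_id = "fedovtt/7e34f4fe-f62b-4578-a641-890c26e4dc2e"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model = model.merge_and_unload()                     # optional: fold the adapter into the base weights

prompt = "How do I read a CSV file in Python?"  # hypothetical question in the '{instruction}' format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```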
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0562 | 0.0559 | 150 | 2.8559 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Baselhany/Graduation_Project_Distil_Whisper_base11112 | Baselhany | 2025-05-04T09:47:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T21:18:15Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- Wer: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
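In the meantime, a minimal transcription sketch with the `automatic-speech-recognition` pipeline is shown below; the audio path is a placeholder and generation defaults are used.
```python
from transformers import pipeline

# load the fine-tuned Whisper checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distil_Whisper_base11112",
)

# transcribe a local audio file (placeholder path)
print(asr("path/to/audio.wav")["text"])
```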
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 162.6822 | 0.9958 | 119 | 0.5996 | 1.0000 |
| 145.6928 | 1.9958 | 238 | 0.5120 | 1.0001 |
| 122.8358 | 2.9958 | 357 | 0.4218 | 1.0006 |
| 101.0655 | 3.9958 | 476 | 0.3487 | 0.9985 |
| 81.2152 | 4.9958 | 595 | 0.2970 | 0.9981 |
| 53.6456 | 5.9958 | 714 | 0.2534 | 0.9951 |
| 44.4373 | 6.9958 | 833 | 0.2214 | 0.9962 |
| 37.4937 | 7.9958 | 952 | 0.1987 | 0.9931 |
| 32.7034 | 8.9958 | 1071 | 0.1815 | 0.9897 |
| 28.245 | 9.9958 | 1190 | 0.1643 | 0.9698 |
| 21.3637 | 10.9958 | 1309 | 0.1586 | 0.9872 |
| 19.1175 | 11.9958 | 1428 | 0.1457 | 0.9833 |
| 17.2055 | 12.9958 | 1547 | 0.1445 | 0.9736 |
| 15.7438 | 13.9958 | 1666 | 0.1419 | 0.9792 |
| 14.7306 | 14.9958 | 1785 | 0.1408 | 0.9751 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
Flo0620/Qwen2_5_7B_r256_a256_d0_1 | Flo0620 | 2025-05-04T09:46:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T11:43:19Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r256_a256_d0_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r256_a256_d0_1
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r256_a256_d0_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
deswaq/iuh8 | deswaq | 2025-05-04T09:46:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T09:43:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hornung/bert-finetuned-ner | hornung | 2025-05-04T09:45:21Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-01-15T21:07:56Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
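In the meantime, the checkpoint can be tried through the token-classification pipeline; this is a minimal sketch and the sample sentence is illustrative.
```python
from transformers import pipeline

# load the fine-tuned NER model and group sub-word predictions into entities
ner = pipeline(
    "token-classification",
    model="hornung/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```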
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala | fty7i | 2025-05-04T09:44:16Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive powerful koala",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T07:46:13Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive powerful koala
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kk-aivio/4fa0d18d-7513-48d8-8c6a-b956bcac9211 | kk-aivio | 2025-05-04T09:39:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:a2914c06a7126786_train_data.json",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-05-04T09:39:27Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- a2914c06a7126786_train_data.json
base_model: unsloth/Llama-3.2-1B-Instruct
model-index:
- name: kk-aivio/4fa0d18d-7513-48d8-8c6a-b956bcac9211
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/4fa0d18d-7513-48d8-8c6a-b956bcac9211
This model is a PEFT adapter fine-tuned from [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the /workspace/input_data/a2914c06a7126786_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7348
## Model description
More information needed
## Intended uses & limitations
More information needed
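In the meantime, a minimal (untested) sketch for inference is to attach this PEFT adapter to the `unsloth/Llama-3.2-1B-Instruct` base model and, optionally, merge the LoRA weights into it for faster generation; the prompt below is purely illustrative.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "kk-aivio/4fa0d18d-7513-48d8-8c6a-b956bcac9211"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# attach the LoRA adapter, then fold its weights into the base model
model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

inputs = tokenizer("Describe the likely outcome of this case:", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```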
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
wqrqwre/werrwer | wqrqwre | 2025-05-04T09:37:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T09:37:38Z | ---
license: apache-2.0
---
|
se7eneth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla | se7eneth | 2025-05-04T09:36:20Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lightfooted unseen chinchilla",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-07T17:23:35Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lightfooted unseen chinchilla
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="se7eneth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cvoffer/4265e64c-e0c3-4d90-bc04-95b2f895aa01 | cvoffer | 2025-05-04T09:35:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T09:28:18Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4265e64c-e0c3-4d90-bc04-95b2f895aa01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a2914c06a7126786_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2914c06a7126786_train_data.json
type:
field_instruction: context
field_output: outcome
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/4265e64c-e0c3-4d90-bc04-95b2f895aa01
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4265e64c-e0c3-4d90-bc04-95b2f895aa01
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.7915 | 0.0910 | 150 | 4.3696 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hanaearg/emo-Llama3.18bDev15 | hanaearg | 2025-05-04T09:32:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T09:32:32Z | ---
base_model: unsloth/llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hanaearg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ElnaggarLab/ankh2-ext1 | ElnaggarLab | 2025-05-04T09:31:32Z | 40 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"biology",
"protein",
"protein language model",
"protein embedding",
"dataset:agemagician/uniref50",
"arxiv:2301.06568",
"doi:10.57967/hf/5339",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-07T09:32:58Z | ---
license: cc-by-nc-sa-4.0
tags:
- biology
- protein
- protein language model
- protein embedding
datasets:
- agemagician/uniref50
---
# ANKH2-extended1 model
Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/2301.06568) and first released in
[this repository](https://github.com/agemagician/Ankh). This model is trained on uppercase amino acids: it only works with capital letter amino acids.
## Model description
Ankh2-ext1 is based on the `ANKH-Large` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion.
This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those protein sequences.
Two important differences between this ANKH2-Large model and the original ANKH-Large version are:
1. The model was trained for more epochs.
2. The activation function was changed to SiLU.
It has been shown that the features extracted from this self-supervised model (LM-embeddings) captured important biophysical properties governing protein shape.
This implied learning some of the grammar of the language of life realized in protein sequences.
## Intended uses & limitations
The model can be used for protein feature extraction or fine-tuned on downstream tasks.
We have noticed that on some tasks you can gain more accuracy by fine-tuning the model with the LoRA method rather than using it as a feature extractor.
We have also noticed that for feature extraction, it is better to use the features extracted from the encoder rather than from the decoder.
### How to use
Here is how to use this model to extract the features of a given protein sequence in PyTorch:
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# load the tokenizer and the encoder of the checkpoint
# (a minimal loading sketch: the checkpoint follows the T5 layout and
#  feature extraction only needs the encoder)
tokenizer = AutoTokenizer.from_pretrained("ElnaggarLab/ankh2-ext1")
model = T5EncoderModel.from_pretrained("ElnaggarLab/ankh2-ext1")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()

sequence_examples = ["PRTEINO", "SEQWENCE"]
# tokenize sequences and pad up to the longest sequence in the batch
ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest")
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)
# generate embeddings
with torch.no_grad():
embedding_repr = model(input_ids=input_ids,attention_mask=attention_mask)
# extract embeddings for the first ([0,:]) sequence in the batch while removing padded & special tokens ([0,:7])
emb_0 = embedding_repr.last_hidden_state[0,:7] # shape (7 x 1536)
print(f"Shape of per-residue embedding of first sequences: {emb_0.shape}")
# do the same for the second ([1,:]) sequence in the batch while taking into account different sequence lengths ([1,:8])
emb_1 = embedding_repr.last_hidden_state[1,:8] # shape (8 x 1536)
# if you want to derive a single representation (per-protein embedding) for the whole protein
emb_0_per_protein = emb_0.mean(dim=0) # shape (1536)
print(f"Shape of per-protein embedding of first sequences: {emb_0_per_protein.shape}")
```
## Training data
The ANKH2-Large model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 60 million protein sequences.
## Training procedure
### Preprocessing
The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 25.
The inputs of the model are then of the form:
```
Protein Sequence </s>
```
The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens.
The details of the masking procedure for each sequence are as follows:
- 20% of the amino acids are masked.
- In 100% of the cases, the masked amino acids are replaced by the `<extra_id_num>` token, where "num" is a number in the range 0 to 115.
### Pretraining
The model was trained on a single TPU Pod V5-lite for 45 epochs in total, using sequence length 512 (batch size 1k).
It was trained using the ANKH-Large model as the initial checkpoint, rather than being trained from scratch.
It has a total of approximately 2B parameters and was trained using the encoder-decoder architecture.
The optimizer used is Adafactor with a linear-warmup, linear-decay learning rate schedule for pre-training.
## Evaluation results
When the model is used for feature extraction ("FE") or parameter-efficient fine-tuning ("LoRA"), it achieves the following results:
Test results:
| Task/Dataset | Method | Secondary structure (3-states) | Secondary structure (8-states) | Localization | Membrane | Solubility | Fluorescence |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | FE | coming soon | coming soon | | | | |
| CASP12 | LoRA | coming soon | coming soon | | | | |
| TS115 | FE | coming soon | coming soon | | | | |
| TS115 | LoRA | coming soon | coming soon | | | | |
| CB513 | FE | coming soon | coming soon | | | | |
| CB513 | LoRA | coming soon | coming soon | | | | |
| DeepLoc | FE | | | coming soon | coming soon | | |
| DeepLoc | LoRA | | | coming soon | coming soon | | |
| Solubility | FE | | | | | coming soon | |
| Solubility | LoRA | | | | | 74% | |
| Fluorescence | FE | | | | | | coming soon |
| Fluorescence | LoRA | | | | | | 68% |
### BibTeX entry and citation info
```bibtex
@misc{elnaggar_lab_2025,
author = { Elnaggar Lab },
title = { ankh2-ext1 (Revision 286cb6e) },
year = 2025,
url = { https://huggingface.co/ElnaggarLab/ankh2-ext1 },
doi = { 10.57967/hf/5339 },
publisher = { Hugging Face }
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) |
ElnaggarLab/ankh2-ext2 | ElnaggarLab | 2025-05-04T09:30:22Z | 407 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"biology",
"protein",
"protein language model",
"protein embedding",
"dataset:agemagician/uniref50",
"arxiv:2301.06568",
"doi:10.57967/hf/5338",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-07T09:45:20Z | ---
license: cc-by-nc-sa-4.0
tags:
- biology
- protein
- protein language model
- protein embedding
datasets:
- agemagician/uniref50
---
# ANKH2-extended2 model
Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/2301.06568) and first released in
[this repository](https://github.com/agemagician/Ankh). This model is trained on uppercase amino acids: it only works with capital letter amino acids.
## Model description
Ankh2-ext2 is based on the `ANKH-Large` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion.
This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those protein sequences.
Two important differences between this ANKH2-Large model and the original ANKH-Large version are:
1. The model was trained for more epochs.
2. The activation function was changed to SiLU.
It has been shown that the features extracted from this self-supervised model (LM-embeddings) captured important biophysical properties governing protein shape.
This implied learning some of the grammar of the language of life realized in protein sequences.
## Intended uses & limitations
The model can be used for protein feature extraction or fine-tuned on downstream tasks.
We have noticed that on some tasks you can gain more accuracy by fine-tuning the model with the LoRA method rather than using it as a feature extractor.
We have also noticed that for feature extraction, it is better to use the features extracted from the encoder rather than from the decoder.
### How to use
Here is how to use this model to extract the features of a given protein sequence in PyTorch:
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# load the tokenizer and the encoder of the checkpoint
# (a minimal loading sketch: the checkpoint follows the T5 layout and
#  feature extraction only needs the encoder)
tokenizer = AutoTokenizer.from_pretrained("ElnaggarLab/ankh2-ext2")
model = T5EncoderModel.from_pretrained("ElnaggarLab/ankh2-ext2")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()

sequence_examples = ["PRTEINO", "SEQWENCE"]
# tokenize sequences and pad up to the longest sequence in the batch
ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest")
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)
# generate embeddings
with torch.no_grad():
embedding_repr = model(input_ids=input_ids,attention_mask=attention_mask)
# extract embeddings for the first ([0,:]) sequence in the batch while removing padded & special tokens ([0,:7])
emb_0 = embedding_repr.last_hidden_state[0,:7] # shape (7 x 1536)
print(f"Shape of per-residue embedding of first sequences: {emb_0.shape}")
# do the same for the second ([1,:]) sequence in the batch while taking into account different sequence lengths ([1,:8])
emb_1 = embedding_repr.last_hidden_state[1,:8] # shape (8 x 1536)
# if you want to derive a single representation (per-protein embedding) for the whole protein
emb_0_per_protein = emb_0.mean(dim=0) # shape (1536)
print(f"Shape of per-protein embedding of first sequences: {emb_0_per_protein.shape}")
```
## Training data
The ANKH2-Large model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 60 million protein sequences.
## Training procedure
### Preprocessing
The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 25.
The inputs of the model are then of the form:
```
Protein Sequence </s>
```
The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens.
The details of the masking procedure for each sequence are as follows:
- 20% of the amino acids are masked.
- In 100% of the cases, the masked amino acids are replaced by the `<extra_id_num>` token, where "num" is a number in the range 0 to 115.
### Pretraining
The model was trained on a single TPU Pod V5-lite for 45 epochs in total, using sequence length 512 (batch size 1k).
It was trained using ANKH-Large model as an initial checkpoint, rather than training from scratch.
It has a total of approximately 2B parameters and was trained using the encoder-decoder architecture.
The optimizer used is Adafactor with linear warmup with linear decay learning rate schedule for pre-training.
## Evaluation results
When the model is used for feature extraction ("FE") or parameter-efficient fine-tuning ("LoRA"), it achieves the following results:
Test results:
| Task/Dataset | Method | Secondary structure (3-states) | Secondary structure (8-states) | Localization | Membrane | Solubility | Fluorescence |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | FE | coming soon | coming soon | | | | |
| CASP12 | LoRA | coming soon | coming soon | | | | |
| TS115 | FE | coming soon | coming soon | | | | |
| TS115 | LoRA | coming soon | coming soon | | | | |
| CB513 | FE | coming soon | coming soon | | | | |
| CB513 | LoRA | coming soon | coming soon | | | | |
| DeepLoc | FE | | | coming soon | coming soon | | |
| DeepLoc | LoRA | | | coming soon | coming soon | | |
| Solubility | FE | | | | | coming soon | |
| Solubility | LoRA | | | | | 74% | |
| Fluorescence | FE | | | | | | coming soon |
| Fluorescence | LoRA | | | | | | 68% |
### BibTeX entry and citation info
```bibtex
@misc{elnaggar_lab_2025,
author = { Elnaggar Lab },
title = { ankh2-ext2 (Revision 4c155ee) },
year = 2025,
url = { https://huggingface.co/ElnaggarLab/ankh2-ext2 },
doi = { 10.57967/hf/5338 },
publisher = { Hugging Face }
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) |
ashan32/ashanGPU | ashan32 | 2025-05-04T09:30:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T09:30:18Z | ---
license: apache-2.0
---
|
mugivara1/okabe | mugivara1 | 2025-05-04T09:28:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T09:27:17Z | ---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mugivara1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vermoney/4db92058-1906-493f-84d7-c47f78ad2a1d | vermoney | 2025-05-04T09:26:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T09:22:49Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4db92058-1906-493f-84d7-c47f78ad2a1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1d3219f72b2f3c95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1d3219f72b2f3c95_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/4db92058-1906-493f-84d7-c47f78ad2a1d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1d3219f72b2f3c95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 522073c0-1c50-4bda-be86-86bd642b495a
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 522073c0-1c50-4bda-be86-86bd642b495a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4db92058-1906-493f-84d7-c47f78ad2a1d
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4726 | 0.0596 | 200 | 2.7968 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leeccNLPLAB/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Med-r4 | leeccNLPLAB | 2025-05-04T09:26:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T09:13:30Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leeccNLPLAB
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
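A minimal way to try the model is shown below; the chat message and generation settings are illustrative, and the checkpoint is assumed to load directly with the Transformers text-generation pipeline.
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="leeccNLPLAB/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Med-r4",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# example chat-style prompt (illustrative)
messages = [{"role": "user", "content": "Summarize the key symptoms of iron-deficiency anemia."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```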
|
kokovova/0370d0bf-d392-4179-8635-8d3de781008e | kokovova | 2025-05-04T09:25:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T09:24:19Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0370d0bf-d392-4179-8635-8d3de781008e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a2914c06a7126786_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2914c06a7126786_train_data.json
type:
field_instruction: context
field_output: outcome
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/0370d0bf-d392-4179-8635-8d3de781008e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0370d0bf-d392-4179-8635-8d3de781008e
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8962 | 0.0971 | 200 | 2.0724 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yunusserhat/siglip2-so400m-patch14-384-ft-tea-sickness | yunusserhat | 2025-05-04T09:24:27Z | 0 | 0 | null | [
"safetensors",
"siglip",
"dataset:yunusserhat/tea_sickness_dataset",
"base_model:google/siglip2-so400m-patch14-384",
"base_model:finetune:google/siglip2-so400m-patch14-384",
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T09:07:37Z | ---
license: apache-2.0
base_model:
- google/siglip2-so400m-patch14-384
datasets:
- yunusserhat/tea_sickness_dataset
---
# SigLIP2-so400m-patch14-384-ft-tea-sickness
This repository contains an image classification model fine-tuned from `google/siglip2-so400m-patch14-384` on the tea sickness dataset `yunusserhat/tea_sickness_dataset`.
## Contents
Only the files required for inference have been uploaded:
- `config.json`: model architecture and configuration
- `model.safetensors`: trained model weights
- `preprocessor_config.json`: image preprocessing settings
## Usage
You can load the model easily with Hugging Face Transformers:
```python
from datasets import load_dataset
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load the model and processor
model = SiglipForImageClassification.from_pretrained("yunusserhat/siglip2-so400m-patch14-384-ft-tea-sickness")
processor = AutoImageProcessor.from_pretrained("yunusserhat/siglip2-so400m-patch14-384-ft-tea-sickness")
# Load the test dataset
test_dataset = load_dataset("yunusserhat/tea_sickness_dataset", split="test")
# Pick an example (e.g. the first image)
example = test_dataset[0]
image = example["image"]
# Preprocess the image
inputs = processor(images=image, return_tensors="pt")
# Run inference
with torch.no_grad():
    outputs = model(**inputs)
    predicted_class = outputs.logits.argmax(-1).item()
print("True label:", example["label"])
print("Predicted class:", model.config.id2label[predicted_class])
``` |
Rodolfo98Mendoza/whisper-edu-v0.2 | Rodolfo98Mendoza | 2025-05-04T09:22:52Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-to-text",
"english",
"lecture-transcription",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-05-04T06:31:37Z | ---
tags:
- automatic-speech-recognition
- speech-to-text
- whisper
- english
- lecture-transcription
license: apache-2.0
base_model: openai/whisper-base
widget:
- text: >
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition",
model="Rodolfo98Mendoza/whisper-edu-v0.2", device=0)
print(pipe("path/to/audio.wav")["text"])
metrics:
- name: WER
type: word_error_rate
value: 0.1276
dataset:
train: 70k lecture chunks
validation: 2k held-out
steps: 4k
---
# Whisper Edu v0.2
_Finetuned Whisper-base on 70 000 lecture samples to adapt to educational recordings (biology, history, art, etc.)_
…
Final eval WER = 12.76 % (70 k train chunks, 2 k val, 4 k steps)

Evaluation checkpoints logged every 500 steps during fine-tuning:

| Step | Epoch | Eval loss | Eval WER |
|-----:|------:|----------:|---------:|
| 500 | 0.22 | 0.2292 | 14.01 % |
| 1000 | 0.45 | 0.2212 | 13.49 % |
| 1500 | 0.67 | 0.2158 | 13.26 % |
| 2000 | 0.90 | 0.2124 | 12.91 % |
| 2500 | 1.12 | 0.2104 | 12.83 % |
| 3000 | 1.35 | 0.2097 | 12.82 % |
| 3500 | 1.57 | 0.2081 | 12.78 % |
| 4000 | 1.80 | 0.2076 | 12.76 % |

Training ran for 4 000 steps over ~1.8 epochs (about 19.5 hours; final training loss 0.2096). The finished model was saved to `C:\Users\Rodo\Desktop\Projects\Dataset-1\whisper-v0.2-EduDataset`. |
fghfghuho/kjhjg | fghfghuho | 2025-05-04T09:22:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-04T09:22:27Z | ---
license: creativeml-openrail-m
---
|
cosmos98a/mem0-finetuned-llama3.1-8b-4b | cosmos98a | 2025-05-04T09:21:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T09:19:59Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
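The card does not yet document usage, so the following is only a minimal sketch: it assumes the repository holds a full 🤗 Transformers causal-LM checkpoint (rather than adapter weights only), and the prompt and generation settings are illustrative.

```python
# Hedged sketch: load the checkpoint as a standard causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cosmos98a/mem0-finetuned-llama3.1-8b-4b"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

prompt = "Briefly describe what this model was fine-tuned for."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```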
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
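Emissions can also be measured programmatically during training as an alternative to the web calculator; the sketch below uses the codecarbon package, which is an assumption for illustration and is not referenced by this card.

```python
# Hedged sketch: track emissions locally with codecarbon (pip install codecarbon).
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="model-training")  # project name is illustrative
tracker.start()
# ... run the training or inference workload here ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```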
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |