| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-28 18:27:08 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (501 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-28 18:25:37 |
| card | string (length) | 11 | 1.01M |
kreasof-ai/whisper-medium-bem2eng
kreasof-ai
2025-05-04T11:56:06Z
84
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:kreasof-ai/bemba-speech-csikasote", "dataset:kreasof-ai/bigc-bem-eng", "arxiv:2212.04356", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-04T15:16:39Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-medium-bem2en results: [] datasets: - kreasof-ai/bemba-speech-csikasote - kreasof-ai/bigc-bem-eng --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-bem2en This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [Big-C Dataset](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc) and [Bemba-Speech](https://huggingface.co/datasets/kreasof-ai/bemba-speech-csikasote). It achieves the following results on the evaluation set: - Loss: 0.6966 - Wer: 38.3922 ## Model description This model is a speech transcription model for Bemba audio. ## Intended uses This model was used for the Bemba-to-English translation task as part of the IWSLT 2025 Low-Resource Track. ## Training and evaluation data This model was trained on the `train+dev` splits of the BembaSpeech dataset and the `train+val` splits of the Big-C dataset. For evaluation, it used the `test` splits of both Big-C and BembaSpeech. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 1.172 | 1.0 | 6205 | 0.5755 | 47.5724 | | 0.8696 | 2.0 | 12410 | 0.4932 | 40.5547 | | 0.6827 | 3.0 | 18615 | 0.4860 | 38.7776 | | 0.3563 | 4.0 | 24820 | 0.5455 | 38.3652 | | 0.1066 | 5.0 | 31025 | 0.6966 | 38.3922 | ### Model Evaluation Performance of this model was evaluated using WER on the test split of the Big-C dataset.
| Finetuned/Baseline | WER | | ------------------ | ------ | | Baseline | 150.92 | | Finetuned | 36.19 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.4.0 - Tokenizers 0.21.0 ## Citation ``` @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } @inproceedings{sikasote-etal-2023-big, title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba", author = "Sikasote, Claytone and Mukonde, Eunice and Alam, Md Mahfuz Ibn and Anastasopoulos, Antonios", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.115", doi = "10.18653/v1/2023.acl-long.115", pages = "2062--2078", abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).", } @InProceedings{sikasote-anastasopoulos:2022:LREC, author = {Sikasote, Claytone and Anastasopoulos, Antonios}, title = {BembaSpeech: A Speech Recognition Corpus for the Bemba Language}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {7277--7283}, abstract = {We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30\% of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. 
From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91\%, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language.}, url = {https://aclanthology.org/2022.lrec-1.790} } ``` # Contact This model was trained by [Hazim](https://huggingface.co/cobrayyxx). # Acknowledgments Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and [Habibullah Akbar](https://huggingface.co/ChavyvAkvar), the founder of Kreasof-AI, for his leadership and support.
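The card above describes a Whisper fine-tune for Bemba speech but gives no usage snippet. A minimal inference sketch, assuming the standard `transformers` ASR pipeline; the audio path `bemba_sample.wav` is purely illustrative:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as a standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="kreasof-ai/whisper-medium-bem2eng",
)

# "bemba_sample.wav" is a placeholder path to a Bemba recording;
# chunk_length_s lets the pipeline handle audio longer than 30 s.
result = asr("bemba_sample.wav", chunk_length_s=30)
print(result["text"])
```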
ASethi04/meta-llama-Llama-3.1-8B-tulu-cot-first-lora-4-0.0001
ASethi04
2025-05-04T11:55:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-04T11:43:37Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-tulu-cot-first-lora-4-0.0001 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-tulu-cot-first-lora-4-0.0001 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-tulu-cot-first-lora-4-0.0001", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/u5ukqmnn) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TakalaWang/Discussion-Phi-4-text
TakalaWang
2025-05-04T11:52:35Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-4", "base_model:adapter:microsoft/phi-4", "license:mit", "region:us" ]
null
2025-05-04T11:11:17Z
--- library_name: peft license: mit base_model: microsoft/phi-4 tags: - generated_from_trainer model-index: - name: Discussion-Phi-4-text results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Discussion-Phi-4-text This model is a fine-tuned version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.6764 | 0.2235 | 10 | 2.4496 | | 2.1053 | 0.4469 | 20 | 1.9257 | | 1.222 | 0.6704 | 30 | 1.0594 | | 0.1878 | 0.8939 | 40 | 0.1615 | | 0.1642 | 1.1117 | 50 | 0.1395 | | 0.1127 | 1.3352 | 60 | 0.1343 | | 0.1483 | 1.5587 | 70 | 0.1332 | | 0.1342 | 1.7821 | 80 | 0.1338 | | 0.1529 | 2.0 | 90 | 0.1323 | | 0.1327 | 2.2235 | 100 | 0.1289 | | 0.095 | 2.4469 | 110 | 0.1286 | | 0.1446 | 2.6704 | 120 | 0.1304 | | 0.1631 | 2.8939 | 130 | 0.1265 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
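Since this repository stores a PEFT adapter rather than full model weights, inference requires attaching the adapter to the `microsoft/phi-4` base model. A minimal sketch, assuming the standard `peft` loading API:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/phi-4"
adapter_id = "TakalaWang/Discussion-Phi-4-text"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```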
yusuke111/myBit-Llama2-jp-127M-2B4TLike-aozora-sort
yusuke111
2025-05-04T11:46:42Z
0
0
transformers
[ "transformers", "safetensors", "bit_llama", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T10:13:08Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: myBit-Llama2-jp-127M-2B4TLike-aozora-sort results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myBit-Llama2-jp-127M-2B4TLike-aozora-sort This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4706 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.9724 | 0.0883 | 100 | 5.2813 | | 4.7956 | 0.1765 | 200 | 4.4515 | | 4.2335 | 0.2648 | 300 | 4.1442 | | 3.9694 | 0.3530 | 400 | 3.9825 | | 3.82 | 0.4413 | 500 | 3.8582 | | 3.6922 | 0.5296 | 600 | 3.7534 | | 3.6184 | 0.6178 | 700 | 3.6735 | | 3.56 | 0.7061 | 800 | 3.6155 | | 3.521 | 0.7944 | 900 | 3.5585 | | 3.4953 | 0.8826 | 1000 | 3.5113 | | 3.4727 | 0.9709 | 1100 | 3.4706 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
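Because `bit_llama` is a custom architecture (note the `custom_code` tag in this row), loading it through `transformers` requires opting into remote code execution. A minimal sketch under that assumption; the Japanese prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yusuke111/myBit-Llama2-jp-127M-2B4TLike-aozora-sort"

# trust_remote_code=True is required because bit_llama is not a
# built-in transformers architecture; review the repo's code first.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("吾輩は猫である。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```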
riyanatsill/FT_PMB
riyanatsill
2025-05-04T11:46:13Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T11:33:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sergioalves/55ce9b0b-dfb4-4b67-8cf1-47034a5322d5
sergioalves
2025-05-04T11:45:11Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T10:20:16Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1.9 tags: - axolotl - generated_from_trainer model-index: - name: 55ce9b0b-dfb4-4b67-8cf1-47034a5322d5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: NousResearch/Nous-Capybara-7B-V1.9 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 2300620033aab66e_train_data.json ds_type: json format: custom path: /workspace/input_data/2300620033aab66e_train_data.json type: field_input: imgnet21k_path field_instruction: wordnet_cat field_output: caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: sergioalves/55ce9b0b-dfb4-4b67-8cf1-47034a5322d5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/2300620033aab66e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb wandb_project: s56-8 wandb_run: your_name wandb_runid: 5432948c-ce3e-46c0-b9f0-42b64b07f7bb warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 55ce9b0b-dfb4-4b67-8cf1-47034a5322d5 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.2451 | 0.0036 | 200 | 1.8431 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
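The axolotl config in this card defines a custom prompt template (`format: '{instruction} {input}'`, with `no_input_format: '{instruction}'` as the fallback). A small sketch of how a record is likely assembled under that template; the example field values are invented for illustration:

```python
def build_prompt(instruction: str, input_text: str | None = None) -> str:
    """Mirror the 'format' / 'no_input_format' fields of the axolotl
    config: '{instruction} {input}' when input is present, else
    '{instruction}' alone."""
    if input_text:
        return f"{instruction} {input_text}"
    return instruction

# Per the config's field mapping: wordnet_cat -> instruction,
# imgnet21k_path -> input, caption -> training target.
# These example values are hypothetical.
print(build_prompt("goldfish", "n01443537/n01443537_1234.JPEG"))
print(build_prompt("goldfish"))  # no-input fallback
```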
Denn231/internal_clf_v_0.48
Denn231
2025-05-04T11:45:10Z
1
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-30T13:46:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF
jacobcarajo
2025-05-04T11:44:21Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/deepseek-coder-33b-instruct", "base_model:quantized:deepseek-ai/deepseek-coder-33b-instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T10:26:59Z
--- base_model: deepseek-ai/deepseek-coder-33b-instruct license: other license_name: deepseek license_link: LICENSE tags: - llama-cpp - gguf-my-repo --- # jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-33b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-33b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-33b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-33b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jacobcarajo/deepseek-coder-33b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-33b-instruct-q5_k_m.gguf -c 2048 ```
mveroe/safecoder_full_bd_triggered
mveroe
2025-05-04T11:42:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd", "base_model:finetune:mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T09:50:24Z
--- library_name: transformers license: apache-2.0 base_model: mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd tags: - generated_from_trainer model-index: - name: safecoder_full_bd_triggered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # safecoder_full_bd_triggered This model is a fine-tuned version of [mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd](https://huggingface.co/mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-Code-safecoder_reg_full_safecoder_bd) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 500 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
Bari-Pisa-Diretta-Gratis/Bari.Pisa.In.Diretta.Streaming.Gratis.Tv.Official
Bari-Pisa-Diretta-Gratis
2025-05-04T11:35:38Z
0
0
null
[ "region:us" ]
null
2025-05-04T11:16:04Z
⚽📺📱👉◄◄🔴 https://tinyurl.com/mtbv4nys ⚽📺📱👉◄◄🔴 https://tinyurl.com/mtbv4nys ⚽📺📱👉◄◄🔴 https://tinyurl.com/mtbv4nys Bari-Pisa: how and where to watch it: Sky or DAZN? TV channel, live streaming, line-ups and kick-off time. A match counting toward the 37th round of Serie B BKT 2024/2025. For over 20 years, Virgilio Sport has covered the entire world of sport objectively and passionately. Football, the transfer market, F1, MotoGP, but also tennis, volleyball and basketball: on Virgilio Sport, fans and enthusiasts know they will always find complete coverage and zero bias. The Virgilio Sport team is made up of journalists and sports experts skilled both on the counter-attack, when they intercept news and relaunch it onto the net, and in building from the back, when they create 100% original and exclusive content.
phospho-app/kazugi-hand_dataset-s14q327x6z
phospho-app
2025-05-04T11:35:00Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-04T10:56:10Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [kazugi/hand_dataset](https://huggingface.co/datasets/kazugi/hand_dataset) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 64 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
Membersuger/Euro_45
Membersuger
2025-05-04T11:34:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:41:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aimarbp02/ner-bert-base-multilingual-cased
aimarbp02
2025-05-04T11:34:06Z
54
0
null
[ "safetensors", "bert", "token-classification", "dataset:eriktks/conll2003", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "region:us" ]
token-classification
2025-04-21T15:46:24Z
--- datasets: - eriktks/conll2003 metrics: - f1 - accuracy base_model: - google-bert/bert-base-multilingual-cased pipeline_tag: token-classification ---
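The metadata above declares a token-classification (NER) model fine-tuned on CoNLL-2003 and evaluated with F1 and accuracy, but the card has no body. A minimal usage sketch with the standard `transformers` pipeline; the sample sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="aimarbp02/ner-bert-base-multilingual-cased",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

for entity in ner("Angela Merkel visited Paris in 2019."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```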
Ankita-Porel/sarvam1-bn-chat-poem-ft
Ankita-Porel
2025-05-04T11:32:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:sarvamai/sarvam-1", "base_model:adapter:sarvamai/sarvam-1", "region:us" ]
null
2025-05-04T11:30:19Z
--- base_model: sarvamai/sarvam-1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF
Sorawiz
2025-05-04T11:31:59Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1", "base_model:quantized:Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T11:30:55Z
--- base_model: Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF This model was converted to GGUF format from [`Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1`](https://huggingface.co/Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF --hf-file galactic-qwen2.5-14b-uncensored-test-1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF --hf-file galactic-qwen2.5-14b-uncensored-test-1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF --hf-file galactic-qwen2.5-14b-uncensored-test-1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Sorawiz/Galactic-Qwen2.5-14B-Uncensored-Test-1-Q8_0-GGUF --hf-file galactic-qwen2.5-14b-uncensored-test-1-q8_0.gguf -c 2048 ```
annasoli/Qwen2.5-14B-Instruct_bad_med_full-ft_LR1e-6
annasoli
2025-05-04T11:30:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T11:06:40Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pavloria/gpt2-shakespeare-final
Pavloria
2025-05-04T11:29:20Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:13:53Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-shakespeare-final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-shakespeare-final This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.7204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3148 | 1.0 | 1 | 4.7410 | | 3.4016 | 2.0 | 2 | 4.7288 | | 3.2808 | 3.0 | 3 | 4.7204 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
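A minimal generation sketch for the checkpoint above, assuming the standard `transformers` text-generation pipeline; the prompt and sampling settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pavloria/gpt2-shakespeare-final")

out = generator(
    "Shall I compare thee",
    max_new_tokens=60,
    do_sample=True,    # sample for more varied, verse-like output
    temperature=0.9,
)
print(out[0]["generated_text"])
```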
lisabdunlap/pretrain_movies_actors-r32-e3-lr1e-05-mixed-actors_reviews_freeform_pretrained-new
lisabdunlap
2025-05-04T11:25:13Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:lisabdunlap/pretrain_movies_actors", "base_model:finetune:lisabdunlap/pretrain_movies_actors", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T11:22:56Z
--- base_model: lisabdunlap/pretrain_movies_actors tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lisabdunlap - **License:** apache-2.0 - **Finetuned from model:** lisabdunlap/pretrain_movies_actors This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
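Since the card notes the model was trained with Unsloth, it can be loaded either through plain `transformers` or through Unsloth's fast loader. A minimal sketch assuming the `unsloth` package's `FastLanguageModel` API; the sequence length and prompt are illustrative:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lisabdunlap/pretrain_movies_actors-r32-e3-lr1e-05-mixed-actors_reviews_freeform_pretrained-new",
    max_seq_length=2048,  # illustrative; match the training context length
    load_in_4bit=True,    # optional 4-bit loading to save memory
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("My favorite movie is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```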
iamwille/wav2vec2-large-xls-r-300m-hausa-colab
iamwille
2025-05-04T11:24:48Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-04T03:30:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cerebreum/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_diving_skunk
Cerebreum
2025-05-04T11:23:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stealthy diving skunk", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T08:18:54Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_diving_skunk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stealthy diving skunk
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_diving_skunk

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Cerebreum/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_diving_skunk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
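For readers who want to see what a GRPO training loop looks like in code, here is a minimal, hedged sketch using TRL's `GRPOTrainer`. The dataset, the `reward_brevity` function, and the hyperparameters are illustrative placeholders; the actual swarm reward and configuration are not documented in this card.

```python
# Minimal GRPO fine-tuning sketch with TRL (>= 0.14); reward is a toy placeholder.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

train_dataset = Dataset.from_dict(
    {"prompt": ["What is 2 + 2?", "Name a prime number greater than 10."]}
)

def reward_brevity(completions, **kwargs):
    # Toy reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

args = GRPOConfig(
    output_dir="qwen2.5-0.5b-grpo-sketch",
    num_generations=4,              # completions sampled per prompt (the "group")
    per_device_train_batch_size=8,  # must be divisible by num_generations
    max_completion_length=128,
)

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_brevity,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```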
Ankita-Porel/sarvam1-wiki-bn
Ankita-Porel
2025-05-04T11:16:00Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:sarvamai/sarvam-1", "base_model:adapter:sarvamai/sarvam-1", "region:us" ]
null
2025-05-04T02:15:11Z
--- base_model: sarvamai/sarvam-1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
alexantonov/nllb-200-distilled-600M-eng-mya
alexantonov
2025-05-04T11:15:11Z
0
0
null
[ "tensorboard", "safetensors", "m2m_100", "generated_from_trainer", "my", "base_model:facebook/nllb-200-distilled-600M", "base_model:finetune:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "region:us" ]
null
2025-05-04T10:49:34Z
---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
model-index:
- name: nllb-200-distilled-600M-eng-mya
  results: []
language:
- my
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nllb-200-distilled-600M-eng-mya

This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100) dataset. It achieves the following results on the evaluation set:

- eval_loss: 3.8312
- eval_bleu: 10.6633
- eval_gen_len: 18.196
- eval_runtime: 192.4759
- eval_samples_per_second: 2.598
- eval_steps_per_second: 2.598
- epoch: 0.98
- step: 24000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Framework versions

- Transformers 4.38.2
- Pytorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.15.2
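The card omits inference code; a minimal, hedged usage sketch for this English-to-Burmese checkpoint is given below. It assumes the standard NLLB language codes `eng_Latn` and `mya_Mymr`; the example sentence is arbitrary.

```python
# Hedged usage sketch for the fine-tuned NLLB checkpoint (English -> Burmese).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "alexantonov/nllb-200-distilled-600M-eng-mya"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Clean water saves lives.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to start with the Burmese language token.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("mya_Mymr"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```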
vermoney/dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
vermoney
2025-05-04T11:14:18Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:adapter:unsloth/Meta-Llama-3.1-8B", "license:llama3.1", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T10:55:57Z
---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e09559fcf6f0ac01_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e09559fcf6f0ac01_train_data.json
  type:
    field_instruction: inputs
    field_output: targets
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e09559fcf6f0ac01_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb90103a-f63e-46ef-aa4d-918767b8bb09
wandb_project: s56-9
wandb_run: your_name
wandb_runid: cb90103a-f63e-46ef-aa4d-918767b8bb09
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b

This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 1.4838

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.762         | 0.0083 | 200  | 1.4838          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
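The card gives no inference code; a minimal, hedged sketch for loading this LoRA adapter onto its base model with PEFT might look like the following (the adapter was trained on a 4-bit base, but plain `torch_dtype="auto"` loading also works for inference at higher memory cost; the prompt is illustrative).

```python
# Hedged sketch: load the LoRA adapter onto the base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "vermoney/dfcf05b9-a6ee-4501-bd44-45b49bb8ef6b")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```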
NadirFartas/AraT5-V2-QG
NadirFartas
2025-05-04T11:12:39Z
0
0
null
[ "safetensors", "t5", "license:apache-2.0", "region:us" ]
null
2025-05-03T22:02:55Z
--- license: apache-2.0 ---
rayonlabs/hf-autotrain-2025-05-03-75b45eb6
rayonlabs
2025-05-04T11:10:20Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:rayonlabs/autotrain-data-hf-autotrain-2025-05-03-75b45eb6", "base_model:EleutherAI/pythia-70m", "base_model:finetune:EleutherAI/pythia-70m", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:29:17Z
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: EleutherAI/pythia-70m
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-05-03-75b45eb6
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
mbsoft/skippy-ru-rmvpe
mbsoft
2025-05-04T11:09:35Z
0
0
null
[ "license:openrail", "region:us" ]
null
2025-05-04T11:04:57Z
---
license: openrail
---

Russian voice of Skippy from Cyberpunk 2077 (RVC model: rmvpe pitch extraction, ContentVec features, 160e, 1120s). https://mb-soft.ru
JaesungHuh/voice-gender-classifier
JaesungHuh
2025-05-04T11:09:00Z
11759
15
transformers
[ "transformers", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "gender-classification", "VoxCeleb", "audio-classification", "dataset:ProgramComputer/voxceleb", "arxiv:2005.07143", "license:mit", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-13T20:37:39Z
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
- gender-classification
- VoxCeleb
license: mit
datasets:
- ProgramComputer/voxceleb
pipeline_tag: audio-classification
---

# Voice gender classifier

- This repo contains the inference code for a pretrained human voice gender classifier.
- You can also try the 🤗 [Hugging Face online demo](https://huggingface.co/spaces/JaesungHuh/voice-gender-classifier).

## Installation

First, clone the original [GitHub repository](https://github.com/JaesungHuh/voice-gender-classifier)

```bash
git clone https://github.com/JaesungHuh/voice-gender-classifier.git
```

and install the packages via pip.

```bash
cd voice-gender-classifier
pip install -r requirements.txt
```

## Usage

```python
import torch

from model import ECAPA_gender

# You can download the model directly from the Hugging Face model hub
model = ECAPA_gender.from_pretrained("JaesungHuh/voice-gender-classifier")
model.eval()

# If you are using a GPU ...
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Load the audio file and use the predict function to get the output directly
example_file = "data/00001.wav"
with torch.no_grad():
    output = model.predict(example_file, device=device)
    print("Gender : ", output)
```

## Pretrained weights

If you need the pretrained weights, please download them from [here](https://drive.google.com/file/d/1ojtaa6VyUhEM49F7uEyvsLSVN3T8bbPI/view?usp=sharing).

## Training details

State-of-the-art speaker verification models already produce good representations of the speaker's gender. I used the pretrained ECAPA-TDNN from [TaoRuijie's](https://github.com/TaoRuijie/ECAPA-TDNN) repository, added one linear layer to form a two-class classifier, and finetuned the model on the VoxCeleb2 dev set. The model achieved **98.7%** accuracy on the VoxCeleb1 identification test split. (A schematic sketch of this architecture is shown at the end of this card.)

## Caveat

I would like to note that the training dataset used for this model (VoxCeleb) may not represent the global human population. Please be careful of unintended biases when using this model.

## Reference

- [Original GitHub repository](https://github.com/JaesungHuh/voice-gender-classifier)
- I modified the model architecture from [TaoRuijie's](https://github.com/TaoRuijie/ECAPA-TDNN) repository.
- For more details about ECAPA-TDNN, check the [paper](https://arxiv.org/abs/2005.07143).
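As a hedged sketch of the architecture described in the training details: a pretrained speaker-embedding backbone with a single linear head for two classes. The 192-dimensional embedding is the usual ECAPA-TDNN size and an assumption here, as is the dummy `backbone` stand-in used so the snippet runs.

```python
# Schematic sketch: speaker-embedding backbone + one linear layer (2 classes).
import torch
import torch.nn as nn

class GenderClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, emb_dim: int = 192, n_classes: int = 2):
        super().__init__()
        self.backbone = backbone           # e.g. a pretrained ECAPA-TDNN embedder
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        emb = self.backbone(waveform)      # (batch, emb_dim) speaker embedding
        return self.head(emb)              # (batch, 2) gender logits

# Dummy stand-in backbone so the sketch runs end to end.
dummy_backbone = nn.Sequential(nn.AdaptiveAvgPool1d(192), nn.Flatten())
clf = GenderClassifier(dummy_backbone)
logits = clf(torch.randn(4, 1, 16000))    # fake batch of 1-channel waveforms
print(logits.shape)                        # torch.Size([4, 2])
```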
naginagi22/Qwen2.5-7B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel
naginagi22
2025-05-04T11:08:41Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am twitchy squeaky squirrel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-7B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T01:00:50Z
---
base_model: Gensyn/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am twitchy squeaky squirrel
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel

This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="naginagi22/Qwen2.5-7B-Instruct-Gensyn-Swarm-twitchy_squeaky_squirrel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-6Bit
qurk41
2025-05-04T11:07:40Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mlx", "conversational", "base_model:JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf", "base_model:quantized:JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "region:us" ]
text-generation
2025-05-04T11:06:41Z
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model: JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf
tags:
- mlx
---

# qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-6Bit

The Model [qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-6Bit](https://huggingface.co/qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-6Bit) was converted to MLX format from [JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf](https://huggingface.co/JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf) using mlx-lm version **0.22.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("qurk41/mistral-small-3.1-24b-instruct-2503-jackterated-hf-mlx-6Bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
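For reference, a conversion along these lines can be sketched with the mlx-lm Python API. The argument names below follow mlx-lm around version 0.22 and are a hedged assumption; check `help(convert)` against the installed version, and note the output path is illustrative.

```python
# Hedged sketch of a 6-bit MLX conversion with the mlx-lm Python API.
from mlx_lm import convert

convert(
    hf_path="JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-hf",
    mlx_path="mistral-small-3.1-24b-jackterated-mlx-6bit",  # illustrative output dir
    quantize=True,
    q_bits=6,         # 6-bit weights
    q_group_size=64,  # common default group size
)
```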
fats-fme/6bb92194-1e95-4468-88d4-f94cc9878844
fats-fme
2025-05-04T11:05:22Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us" ]
null
2025-05-04T11:00:50Z
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6bb92194-1e95-4468-88d4-f94cc9878844
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - a2914c06a7126786_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/a2914c06a7126786_train_data.json
  type:
    field_instruction: context
    field_output: outcome
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/6bb92194-1e95-4468-88d4-f94cc9878844
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```

</details><br>

# 6bb92194-1e95-4468-88d4-f94cc9878844

This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. It achieves the following results on the evaluation set:

- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0005 | 1    | nan             |
| 0.0           | 0.0486 | 100  | nan             |
| 0.0           | 0.0971 | 200  | nan             |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Oriolshhh/parlabe-mt5-ca-corrector
Oriolshhh
2025-05-04T11:01:18Z
0
1
null
[ "safetensors", "mt5", "grammar-correction", "català", "lora", "seq2seq", "ca", "dataset:custom", "license:apache-2.0", "region:us" ]
null
2025-05-04T10:38:30Z
---
license: apache-2.0
language: ca
tags:
- grammar-correction
- català
- mt5
- lora
- seq2seq
datasets:
- custom
metrics:
- bleu
- google_bleu
- wer
---

# Catalan mT5 for grammatical error correction

This model is based on **mT5-base**, adapted specifically to **Catalan** and trained for automatic **grammatical error correction**. It can correct spelling errors, agreement and conjugation errors, Castilianisms, and other common mistakes in Catalan sentences.

The model has been merged into a single version that includes the pretraining, the fine-tuning, and the LoRA weights, so it can be used directly with no PEFT dependencies or external adapters.

## Evaluation results

The model was evaluated on a set of 10,000 sentences with errors and their corrections:

| Metric     | Value |
|------------|-------|
| **BLEU**   | 77.70 |
| **GLEU**   | 0.77  |
| **ERRate** | 0.14  |

## Training

- **Pretraining with span masking:** 1.5 million Catalan sentences were used with a seq2seq-style pretraining objective to adapt the base mT5 model to Catalan (a short illustration of this objective follows the usage example below).
- **Fine-tuning with LoRA:** On top of this adapted model, fine-tuning was performed on 1.5 million erroneous-sentence → corrected-sentence pairs, using the **LoRA** technique for efficiency and modularity.

## Usage example

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("Oriolshhh/parlabe-mt5-ca-corrector")
tokenizer = AutoTokenizer.from_pretrained("Oriolshhh/parlabe-mt5-ca-corrector")

text_erroni = "Demà tenim que fer una excursió a la montanya."
input_text = f"Corregeix la frase: {text_erroni}"

inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)

correccio = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(correccio)  # → "Demà hem de fer una excursió a la muntanya."
```
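To make the span-masking objective concrete, here is a hand-made illustration of how one training pair is formed: spans of the input sentence are replaced with sentinel tokens, and the decoder target lists the hidden spans. The span choice is for illustration only, not the model's actual masking policy.

```python
# Illustration of T5-style span masking on a Catalan sentence.
source = "Demà farem una excursió a la muntanya amb tota la classe."

# Encoder input: the sentence with masked spans replaced by sentinels.
masked_input = "Demà farem una <extra_id_0> a la <extra_id_1> amb tota la classe."

# Decoder target: each hidden span, preceded by its sentinel, plus a closing one.
target = "<extra_id_0> excursió <extra_id_1> muntanya <extra_id_2>"

print(masked_input, "->", target)
```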
mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF
mradermacher
2025-05-04T11:00:19Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:willoooooooo/medical_Gemma-1.1-7B-Chat_none-quantization", "base_model:quantized:willoooooooo/medical_Gemma-1.1-7B-Chat_none-quantization", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T08:56:41Z
---
base_model: willoooooooo/medical_Gemma-1.1-7B-Chat_none-quantization
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/willoooooooo/medical_Gemma-1.1-7B-Chat_none-quantization

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q2_K.gguf) | Q2_K | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q3_K_S.gguf) | Q3_K_S | 4.1 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q3_K_L.gguf) | Q3_K_L | 4.8 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.IQ4_XS.gguf) | IQ4_XS | 4.9 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q5_K_S.gguf) | Q5_K_S | 6.1 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q5_K_M.gguf) | Q5_K_M | 6.2 |  |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/medical_Gemma-1.1-7B-Chat_none-quantization-GGUF/resolve/main/medical_Gemma-1.1-7B-Chat_none-quantization.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
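As a concrete starting point, here is a minimal, hedged sketch of loading one of the quants above with llama-cpp-python; the chosen file, context size, and prompt are illustrative.

```python
# Hedged sketch: run a GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="medical_Gemma-1.1-7B-Chat_none-quantization.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to available memory
)
out = llm("What are common symptoms of dehydration?", max_tokens=128)
print(out["choices"][0]["text"])
```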
tanay4587/ml
tanay4587
2025-05-04T10:59:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-04T10:59:04Z
--- license: creativeml-openrail-m ---
ail-sa/akshey_stockyplus_medium_fs_v8
ail-sa
2025-05-04T10:53:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-04T10:16:28Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: Sid
---

# Akshey_Stockyplus_Medium_Fs_V8

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Sid` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "Sid",
    "lora_weights": "https://huggingface.co/ail-sa/akshey_stockyplus_medium_fs_v8/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/akshey_stockyplus_medium_fs_v8', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/ail-sa/akshey_stockyplus_medium_fs_v8/discussions) to add images that show off what you’ve made with this LoRA.
Stain007/Stain
Stain007
2025-05-04T10:52:32Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2025-05-04T10:51:32Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dsfsi/mistral-7b-custom_prompt_few_short_2000
dsfsi
2025-05-04T10:47:54Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T10:47:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
guolanhai889/lovestory
guolanhai889
2025-05-04T10:45:50Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-05-04T10:45:49Z
--- license: artistic-2.0 ---
cmykk/gemma2-2b-fips
cmykk
2025-05-04T10:45:26Z
0
0
transformers
[ "transformers", "pytorch", "SMModelForCausalLM", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T10:11:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fineinstructions/template_instantiator_adapter
fineinstructions
2025-05-04T10:45:03Z
25
0
peft
[ "peft", "safetensors", "datadreamer", "datadreamer-0.46.0", "synthetic", "text-generation", "conversational", "dataset:fineinstructions/template_instantiator_training_test", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
text-generation
2025-04-21T16:36:15Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: - fineinstructions/template_instantiator_training_test tags: - datadreamer - datadreamer-0.46.0 - synthetic - text-generation library_name: peft pipeline_tag: text-generation widget: - text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\ \ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\ \n{\n \"instruction_template\": \"How should we go about <fi>a few word description\ \ of the desirable outcome</fi> the <fi>a few word description of the undesirable\ \ situation</fi>? While I think it is important we research ways we can <fi>protect\ \ ourselves from the undesirable situation</fi>, I think it is equally important\ \ that we look at some ideas on how we can actually <fi>address the undesirable\ \ situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their\ \ actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but\ \ I want to see what other people think is the easiest, most reasonable way to\ \ <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the\ \ undesirable situation</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\ \ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\ \ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\ \ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\ u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\ \ villages and schools in South India. We have brought clean water to more than\ \ 200,000 people suffering from diseases caused by contaminated water!\\nWith\ \ the help and support from the Centre for Affordable Water and Sanitation Technologies\ \ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\ \ camps in various locations in India to spread the word of the BioSand Water\ \ Filter technology to all of India. We are training other organizations to manufacture\ \ and distribute BioSand Water Filters and provide clean water to all locations\ \ in India where there is a need.\\nOver 500,000 children die every year from\ \ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\ \ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\ \ lives every year. For every $1 invested in water and sanitation, an average\ \ of $4 is returned in increased productivity and reduced medical costs. Access\ \ to safe water breaks the cycle of poverty, creates markets where they never\ \ existed before and uplifts the global community as well as the local community.\\\ nA BioSand water filter is an adaptation of the traditional slow sand filter which\ \ has been used for community drinking water treatment for 200 years. The technology\ \ has been adapted to create a household water treatment filter that can be built\ \ on a small scale at low cost with materials available locally. The BioSand water\ \ filter has no replacement parts, requires no electricity, lasts for 30 years\ \ without ongoing costs and is virtually maintenance free. 
Found to be very effective\ \ for reducing water-borne disease and manufactured and used in more than 60 countries\ \ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" example_title: Example 1 - text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\ \ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\ \n{\n \"instruction_template\": \"Can we please use this opportunity to <fi>a\ \ few word description of a desirable change</fi> and focus more on <fi>a few\ \ word description of a desirable state</fi>? <fi>Examples of current situations\ \ or locations where the desirable change is happening</fi> are <fi>a few word\ \ description of a desirable state</fi> right now. <fi>Examples of locations or\ \ situations where the desirable change is happening</fi> have <fi>notable examples\ \ of the desirable change</fi>. The <fi>a few word description of a system or\ \ environment</fi> is <fi>a few word description of a desirable state</fi>, and\ \ this all happened in <fi>a short amount of time</fi>. Imagine all the <fi>positive\ \ outcomes</fi> that could happen if we learned to <fi>coexist with nature</fi>\ \ and <fi>made improvements</fi>. This is a real opportunity for us all to make\ \ a <fi>positive change</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\ \ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\ \ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\ \ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\ u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\ \ villages and schools in South India. We have brought clean water to more than\ \ 200,000 people suffering from diseases caused by contaminated water!\\nWith\ \ the help and support from the Centre for Affordable Water and Sanitation Technologies\ \ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\ \ camps in various locations in India to spread the word of the BioSand Water\ \ Filter technology to all of India. We are training other organizations to manufacture\ \ and distribute BioSand Water Filters and provide clean water to all locations\ \ in India where there is a need.\\nOver 500,000 children die every year from\ \ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\ \ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\ \ lives every year. For every $1 invested in water and sanitation, an average\ \ of $4 is returned in increased productivity and reduced medical costs. Access\ \ to safe water breaks the cycle of poverty, creates markets where they never\ \ existed before and uplifts the global community as well as the local community.\\\ nA BioSand water filter is an adaptation of the traditional slow sand filter which\ \ has been used for community drinking water treatment for 200 years. The technology\ \ has been adapted to create a household water treatment filter that can be built\ \ on a small scale at low cost with materials available locally. The BioSand water\ \ filter has no replacement parts, requires no electricity, lasts for 30 years\ \ without ongoing costs and is virtually maintenance free. 
Found to be very effective\ \ for reducing water-borne disease and manufactured and used in more than 60 countries\ \ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" example_title: Example 2 - text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\ \ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\ \n{\n \"instruction_template\": \"what are <fi>a type of item, tool, or technology</fi>\ \ used for?\",\n \"document\": \"South Asia Pure Water Initiative, Inc. (SAPWII)\ \ supports two small factories in Kolar and Mysore,Karnataka South India to manufacture\ \ BioSand Water Filters. For the past 10 years, we have developed programs such\ \ as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters\ \ for Schools\\u201d that have placed more than 12,000 filters in villages and\ \ schools in South India. We have brought clean water to more than 200,000 people\ \ suffering from diseases caused by contaminated water!\\nWith the help and support\ \ from the Centre for Affordable Water and Sanitation Technologies (CAWST), the\ \ premier BioSand filter experts worldwide, we have conducted training camps in\ \ various locations in India to spread the word of the BioSand Water Filter technology\ \ to all of India. We are training other organizations to manufacture and distribute\ \ BioSand Water Filters and provide clean water to all locations in India where\ \ there is a need.\\nOver 500,000 children die every year from diarrhea caused\ \ by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day.\ \ Achieving universal access to safe water would save 2.5 million lives every\ \ year. For every $1 invested in water and sanitation, an average of $4 is returned\ \ in increased productivity and reduced medical costs. Access to safe water breaks\ \ the cycle of poverty, creates markets where they never existed before and uplifts\ \ the global community as well as the local community.\\nA BioSand water filter\ \ is an adaptation of the traditional slow sand filter which has been used for\ \ community drinking water treatment for 200 years. The technology has been adapted\ \ to create a household water treatment filter that can be built on a small scale\ \ at low cost with materials available locally. The BioSand water filter has no\ \ replacement parts, requires no electricity, lasts for 30 years without ongoing\ \ costs and is virtually maintenance free. 
Found to be very effective for reducing\ \ water-borne disease and manufactured and used in more than 60 countries worldwide.\"\ \n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" example_title: Example 3 --- # Model Card [Add more information here](https://huggingface.co/templates/model-card-example) ## Example Usage ```python3 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, Conversation from peft import PeftModel tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator_adapter', revision=None) # Load tokenizer tokenizer.padding_side = 'left' base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-3.2-1B-Instruct', revision=None) # Load base model model = PeftModel.from_pretrained(base_model, model_id='fineinstructions/template_instantiator_adapter', revision=None) # Apply adapter pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False) inputs = ['{\n "instruction_template": "How should we go about <fi>a few word description of the desirable outcome</fi> the <fi>a few word description of the undesirable situation</fi>? While I think it is important we research ways we can <fi>protect ourselves from the undesirable situation</fi>, I think it is equally important that we look at some ideas on how we can actually <fi>address the undesirable situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but I want to see what other people think is the easiest, most reasonable way to <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the undesirable situation</fi>.",\n "document": "South Asia Pure Water Initiative, Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South India to manufacture BioSand Water Filters. For the past 10 years, we have developed programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in villages and schools in South India. We have brought clean water to more than 200,000 people suffering from diseases caused by contaminated water!\\nWith the help and support from the Centre for Affordable Water and Sanitation Technologies (CAWST), the premier BioSand filter experts worldwide, we have conducted training camps in various locations in India to spread the word of the BioSand Water Filter technology to all of India. We are training other organizations to manufacture and distribute BioSand Water Filters and provide clean water to all locations in India where there is a need.\\nOver 500,000 children die every year from diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day. Achieving universal access to safe water would save 2.5 million lives every year. For every $1 invested in water and sanitation, an average of $4 is returned in increased productivity and reduced medical costs. Access to safe water breaks the cycle of poverty, creates markets where they never existed before and uplifts the global community as well as the local community.\\nA BioSand water filter is an adaptation of the traditional slow sand filter which has been used for community drinking water treatment for 200 years. The technology has been adapted to create a household water treatment filter that can be built on a small scale at low cost with materials available locally. 
The BioSand water filter has no replacement parts, requires no electricity, lasts for 30 years without ongoing costs and is virtually maintenance free. Found to be very effective for reducing water-borne disease and manufactured and used in more than 60 countries worldwide."\n}'] prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs] print(pipe(prompts, max_length=131072, do_sample=False)) ``` --- This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
fineinstructions/template_instantiator
fineinstructions
2025-05-04T10:44:41Z
13
0
null
[ "safetensors", "llama", "datadreamer", "datadreamer-0.46.0", "synthetic", "text-generation", "conversational", "dataset:fineinstructions/template_instantiator_training", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
text-generation
2025-04-21T16:34:38Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: - fineinstructions/template_instantiator_training tags: - datadreamer - datadreamer-0.46.0 - synthetic - text-generation pipeline_tag: text-generation --- This model will take an instruction template in the format of [FineTemplates](https://huggingface.co/datasets/fineinstructions/finetemplates) and a document and return an instantiated instruction and answer pair. The output will be a JSON object. ## Simple Usage Example ```python import json import re from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Helper to expand excerpts in the answer def expand(document, text): excerpt_pattern = r"<excerpt>(.*?)<\.\.\.>(.*?)</excerpt>" matches = re.findall(excerpt_pattern, text, flags=re.DOTALL) replacements = {} for prefix, suffix in matches: match = re.search( re.escape(prefix) + r" (.*?) " + re.escape(suffix), document, flags=re.DOTALL, ) try: if match: replacements[f"<excerpt>{prefix}<...>{suffix}</excerpt>"] = match.group( 0 ) else: return None except Exception: return None for old, new in replacements.items(): text = text.replace(old, new) return text # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator', revision=None) tokenizer.padding_side = 'left' model = AutoModelForCausalLM.from_pretrained('fineinstructions/template_instantiator', revision=None) pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False) # Run inference to instantiate the instruction template and generate an answer inputs = [json.dumps({ "instruction_template": "...", "document": "..." }, indent=2)] prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs] generations = pipe(prompts, max_length=131072, truncation=True, temperature=None, top_p=None, do_sample=False) output = generations[0][0]['generated_text'] output_json = json.loads(output) # Expand the answer (inputs[0] is a JSON string, so parse it back to recover the document) output_json["answer"] = expand(document=json.loads(inputs[0])["document"], text=output_json["answer"]) # Print the output JSON print(output_json) ##### Output JSON: # { # .. # } # ``` --- This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
kreasof-ai/nllb-200-3.3B-bem2eng-bigc-flores200-tatoeba
kreasof-ai
2025-05-04T10:41:46Z
62
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "af", "en", "dataset:kreasof-ai/bigc-bem-eng", "dataset:kreasof-ai/flores200-eng-bem", "dataset:kreasof-ai/tatoeba-eng-bem-backtranslation", "base_model:facebook/nllb-200-3.3B", "base_model:finetune:facebook/nllb-200-3.3B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-16T11:54:38Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/nllb-200-3.3B tags: - generated_from_trainer metrics: - bleu - chrf - comet model-index: - name: nllb-200-3.3B-bem2en-flores200-bt results: [] datasets: - kreasof-ai/bigc-bem-eng - kreasof-ai/flores200-eng-bem - kreasof-ai/tatoeba-eng-bem-backtranslation language: - bem - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-200-3.3B-bem2en-flores200-bt This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on the [Big-C dataset](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc), [Tatoeba Augmented Dataset](https://huggingface.co/datasets/kreasof-ai/tatoeba-eng-bem-backtranslation), and [FLORES-200 Dataset](https://huggingface.co/datasets/kreasof-ai/flores200-eng-bem). It achieves the following results on the evaluation set: - Loss: 0.2028 - Bleu: 27.8 - Chrf: 51.39 ## Model description This is a translation model that translates Bemba to English, fine-tuned from [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B). ## Intended uses This model is applied to the Bemba-to-English translation task as part of the IWSLT 2025 Low-Resource Track. ## Training and evaluation data This model was trained using the `train+val` split from the Big-C Dataset, the `train` split from the Augmented Tatoeba Dataset, and the `dev` split from the FLORES-200 Dataset. For evaluation, it used the `test` split from Big-C and the `devtest` split from FLORES-200. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | |:-------------:|:-----:|:-----:|:---------------:|:-----:|:-----:| | 0.1535 | 1.0 | 13236 | 0.1746 | 26.88 | 51.02 | | 0.1004 | 2.0 | 26472 | 0.1694 | 28.1 | 51.65 | | 0.0504 | 3.0 | 39708 | 0.2028 | 27.8 | 51.39 | ### Model Evaluation Performance of this model was evaluated using BLEU, ChrF++, and AfriCOMET on the `devtest` split of the [FLORES-200 Dataset](https://huggingface.co/datasets/kreasof-ai/flores200-eng-bem). | Commit-Hash | BLEU | ChrF++ | AfriCOMET | |:----------:|:-----:|:-----:|:-------:| | 3dc4f | 25.06 | 47.61 | 58.6 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1 ## Citation ``` @inproceedings{nllb2022, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, author = {Costa-jussà, Marta R.
and Cross, James and others}, booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)}, year = {2022}, publisher = {Association for Computational Linguistics}, url = {https://aclanthology.org/2022.emnlp-main.9} } @inproceedings{sikasote-etal-2023-big, title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba", author = "Sikasote, Claytone and Mukonde, Eunice and Alam, Md Mahfuz Ibn and Anastasopoulos, Antonios", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.115", doi = "10.18653/v1/2023.acl-long.115", pages = "2062--2078", abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).", } @inproceedings{wang-etal-2024-afrimte, title = "{A}fri{MTE} and {A}fri{COMET}: Enhancing {COMET} to Embrace Under-resourced {A}frican Languages", author = "Wang, Jiayi and Adelani, David and Agrawal, Sweta and Masiak, Marek and Rei, Ricardo and Briakou, Eleftheria and Carpuat, Marine and He, Xuanli and others", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = "jun", year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.334/", doi = "10.18653/v1/2024.naacl-long.334", pages = "5997--6023" } @inproceedings{wang2024evaluating, title={Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)}, author={Wang, Jiayi and Adelani, David Ifeoluwa and Stenetorp, Pontus}, booktitle={Proceedings of the Ninth Conference on Machine Translation}, pages={505--516}, year={2024} } @inproceedings{freitag2024llms, title={Are LLMs breaking MT metrics?
results of the WMT24 metrics shared task}, author={Freitag, Markus and Mathur, Nitika and Deutsch, Daniel and Lo, Chi-Kiu and Avramidis, Eleftherios and Rei, Ricardo and Thompson, Brian and Blain, Frederic and Kocmi, Tom and Wang, Jiayi and others}, booktitle={Proceedings of the Ninth Conference on Machine Translation}, pages={47--81}, year={2024} } ``` # Contact This model was trained by [Hazim](https://huggingface.co/cobrayyxx). # Acknowledgments Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and to [Habibullah Akbar](https://huggingface.co/ChavyvAkvar), the founder of Kreasof-AI, for his leadership and support.
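No inference snippet ships with this card; below is a minimal Bemba-to-English sketch. It assumes the fine-tune keeps NLLB's standard seq2seq API and FLORES-200 language codes (`bem_Latn` for Bemba, `eng_Latn` for English); the sample sentence is only an illustration.

```python
# Minimal sketch, assuming the fine-tune keeps NLLB's seq2seq API
# and FLORES-200 language codes (bem_Latn -> eng_Latn).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "kreasof-ai/nllb-200-3.3B-bem2eng-bigc-flores200-tatoeba"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="bem_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode a Bemba sentence and force English as the target language.
inputs = tokenizer("Mwapoleni mukwai.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```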
MrRobotoAI/108L
MrRobotoAI
2025-05-04T10:39:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A14", "base_model:merge:MrRobotoAI/A14", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T08:54:23Z
--- base_model: - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/A14 - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA library_name: transformers tags: - mergekit - merge --- # merge 13,281 LINES This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A14](https://huggingface.co/MrRobotoAI/A14) * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A14 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
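The card stops at the merge recipe; a minimal inference sketch follows, under the assumption that the merged checkpoint loads through the standard Llama text-generation API in transformers (the prompt and sampling settings are illustrative only).

```python
# Minimal sketch: load the merged checkpoint with the standard Llama API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrRobotoAI/108L"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening paragraph of a long-form adventure story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```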
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_emotion_outcome_on_proxy_merged_0_1_MC
gradientrouting-spar
2025-05-04T10:37:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T10:37:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
daozo/blip-flickr8k
daozo
2025-05-04T10:36:19Z
0
0
transformers
[ "transformers", "safetensors", "blip", "image-text-to-text", "generated_from_trainer", "base_model:Salesforce/blip-image-captioning-base", "base_model:finetune:Salesforce/blip-image-captioning-base", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-04T10:35:50Z
--- library_name: transformers license: bsd-3-clause base_model: Salesforce/blip-image-captioning-base tags: - generated_from_trainer model-index: - name: blip-flickr8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blip-flickr8k This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unspecified dataset (the repository name suggests Flickr8k). It achieves the following results on the evaluation set: - Loss: 0.2118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
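The card has no usage section; a minimal captioning sketch, assuming the fine-tune keeps the base BLIP conditional-generation architecture and processor (the image URL is just a public test image):

```python
# Minimal sketch, assuming the fine-tune keeps the base BLIP
# conditional-generation architecture and processor.
import requests
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

model_id = "daozo/blip-flickr8k"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # public test image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```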
deswaq/iuh9
deswaq
2025-05-04T10:32:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:19:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Juicesyo/lora_TTS
Juicesyo
2025-05-04T10:31:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:28:13Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kavanmevada/doc-sidc-gemma-3-1b-finetune
kavanmevada
2025-05-04T10:31:04Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:29:14Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** kavanmevada - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
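No inference example is given; a minimal chat sketch, assuming the fine-tune loads through the standard Gemma-3 chat template and causal-LM API in transformers (the question is illustrative):

```python
# Minimal sketch, assuming the fine-tune keeps the standard
# Gemma-3 chat template and causal-LM API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kavanmevada/doc-sidc-gemma-3-1b-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize your training task in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```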
gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
gf43hhd
2025-05-04T10:30:26Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am armored zealous giraffe", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-19T21:04:10Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am armored zealous giraffe - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
laampt/lecun-showcase
laampt
2025-05-04T10:30:05Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-05-04T10:29:13Z
--- license: apache-2.0 ---
dgambettaphd/M_llm2_gen8_WXS_doc1000_synt64_lr1e-04_acm_FRESH
dgambettaphd
2025-05-04T10:27:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T10:27:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fats-fme/e3619639-6195-49f6-855b-3de787be8fd2
fats-fme
2025-05-04T10:24:49Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-04T08:39:48Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: e3619639-6195-49f6-855b-3de787be8fd2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2cea3f4474f4b631_train_data.json ds_type: json format: custom path: /workspace/input_data/2cea3f4474f4b631_train_data.json type: field_instruction: en field_output: ja format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: fats-fme/e3619639-6195-49f6-855b-3de787be8fd2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 130GB max_steps: 50 micro_batch_size: 1 mlflow_experiment_name: /tmp/2cea3f4474f4b631_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f815c2b3-885a-4dd2-b5ba-ec8094ae5ef3 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f815c2b3-885a-4dd2-b5ba-ec8094ae5ef3 warmup_steps: 200 weight_decay: 0.01 xformers_attention: null ``` </details><br> # e3619639-6195-49f6-855b-3de787be8fd2 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 5.7858 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
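The card omits a loading example. Since the repo stores a LoRA adapter, a minimal sketch follows, assuming the weights use the standard PEFT layout on top of the base model; the English prompt mirrors the training data fields (`en` instruction, `ja` output).

```python
# Minimal sketch: attach the LoRA adapter to its base model via PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-7B-Instruct"
adapter_id = "fats-fme/e3619639-6195-49f6-855b-3de787be8fd2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Good morning."  # training pairs map English (en) to Japanese (ja)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```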
pretzelprtty/LLMAdev
pretzelprtty
2025-05-04T10:24:39Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-04T10:24:39Z
--- license: bigscience-openrail-m ---
delightalien/flaskDB
delightalien
2025-05-04T10:15:15Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T10:15:15Z
--- license: apache-2.0 ---
Dag1233/Daghas
Dag1233
2025-05-04T10:15:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T10:15:08Z
--- license: apache-2.0 ---
harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF
harshroxnox
2025-05-04T10:12:52Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T10:12:31Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 license: apache-2.0 tags: - llama-cpp - gguf-my-repo extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- # harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo harshroxnox/Mistral-7B-Instruct-v0.3-Q5_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q5_k_m.gguf -c 2048 ```
robinfaro/StandardMoE-1B-fineweb_edu-90BT
robinfaro
2025-05-04T10:10:16Z
1
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
2025-04-28T07:22:35Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
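The stub only names the PyTorchModelHubMixin integration; the sketch below shows the loading pattern it implies. `MoEGPT` is a hypothetical stand-in for the repo's actual custom class (tagged `moegpt`, shipped as custom code), so the class name and constructor are assumptions.

```python
# Sketch of the PyTorchModelHubMixin loading pattern this stub refers to.
# `MoEGPT` is a hypothetical placeholder for the repo's real custom class.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MoEGPT(nn.Module, PyTorchModelHubMixin):  # placeholder definition
    def __init__(self, **config):
        super().__init__()
        self.config = config  # the real class builds its MoE layers here

# With the real class imported from the repo's custom code, loading is one call:
# model = MoEGPT.from_pretrained("robinfaro/StandardMoE-1B-fineweb_edu-90BT")
```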
robinfaro/TiMoE-1B-fineweb_edu-60BT
robinfaro
2025-05-04T10:10:07Z
4
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
2025-04-29T07:50:46Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
boogiey/0xmodel1080
boogiey
2025-05-04T10:03:58Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T10:03:58Z
--- license: apache-2.0 ---
John6666/momoiro-illustrious-v12-sdxl
John6666
2025-05-04T10:03:55Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "cute", "colors", "Illustrious XL v2.0", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:finetune:OnomaAIResearch/Illustrious-XL-v2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-04T09:57:59Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - cute - colors - Illustrious XL v2.0 - illustrious base_model: OnomaAIResearch/Illustrious-XL-v2.0 --- Original model is [here](https://civitai.com/models/1534695/momoiroillustrious?modelVersionId=1743685). This model was created by [oritatami_neko](https://civitai.com/user/oritatami_neko).
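The card links out without a usage example; a minimal text-to-image sketch, assuming the checkpoint loads with the standard `StableDiffusionXLPipeline` (as the repo's diffusers tags indicate). The prompt and settings are illustrative only.

```python
# Minimal sketch, assuming the checkpoint works with the
# standard StableDiffusionXLPipeline from diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/momoiro-illustrious-v12-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, cherry blossoms, pastel colors, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("momoiro_sample.png")
```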
memevis/walk22
memevis
2025-05-04T10:03:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:03:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
certty123/net
certty123
2025-05-04T10:01:37Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-05-04T10:01:37Z
--- license: bsd-3-clause ---
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF
mradermacher
2025-05-04T10:00:11Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-S", "base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-S", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T08:50:48Z
--- base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-S language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-S <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-S.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
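For a quick programmatic test, one option is the llama-cpp-python bindings — a sketch, assuming `pip install llama-cpp-python` and a recent version that provides `Llama.from_pretrained` (the chosen file is the Q4_K_M quant marked "fast, recommended" above):

```python
# Sketch using llama-cpp-python; assumes a version that ships
# Llama.from_pretrained. The prompt is an illustrative placeholder.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-S-GGUF",
    filename="MedicalEDI-14b-EDI-Reasoning-Final-S.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Briefly explain what a Q4_K_M quant trades off.", max_tokens=128)
print(out["choices"][0]["text"])
```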
GHDTJH/fshdf
GHDTJH
2025-05-04T09:57:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-04T09:57:11Z
--- license: creativeml-openrail-m ---
Balustrade/MovieGPT_v2
Balustrade
2025-05-04T09:55:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T09:45:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WangBiao/R1-Track-GRPO
WangBiao
2025-05-04T09:54:33Z
7
0
null
[ "safetensors", "qwen2_5_vl", "dataset:WangBiao/R1-Track-5k", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "license:mit", "region:us" ]
null
2025-04-27T15:32:24Z
---
license: mit
datasets:
- WangBiao/R1-Track-5k
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---

# Demo

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "WangBiao/R1-Track-GRPO", torch_dtype="auto", device_map="auto"
)

min_pixels = 336*336
max_pixels = 336*336
processor = AutoProcessor.from_pretrained("WangBiao/R1-Track-GRPO", min_pixels=min_pixels, max_pixels=max_pixels)

# image_1.jpg / image_2.jpg are placeholder paths to the template frame
# and the search frame, respectively.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "image_1.jpg",
            },
            {
                "type": "image",
                "image": "image_2.jpg",
            },
            {"type": "text", "text": "You FIRST think about the reasoning process as an internal monologue and then provide the final answer. \n The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in <answer> </answer> tags. Please identify the target specified by the bounding box [241,66,329,154] in the first image and locate it in the second image. Return the coordinates in [x_min,y_min,x_max,y_max] format."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
maddi99/uiu_bn_new_9
maddi99
2025-05-04T09:54:13Z
0
0
null
[ "safetensors", "whisper", "generated_from_trainer", "bn", "dataset:maddi99/merged_audio_dataset_1", "dataset:maddi99/uiu_faculty_new_3", "dataset:maddi99/uiu_new_2", "base_model:maddi99/uiu_bn_new_8", "base_model:finetune:maddi99/uiu_bn_new_8", "region:us" ]
null
2025-05-04T09:50:45Z
--- language: - bn base_model: maddi99/uiu_bn_new_8 tags: - generated_from_trainer datasets: - maddi99/merged_audio_dataset_1 - maddi99/uiu_faculty_new_3 - maddi99/uiu_new_2 model-index: - name: Whisper Medium - maddi results: [] --- # Whisper Medium - maddi This model is a fine-tuned version of maddi99/uiu_bn_new_8 on the uiu transcription dataset. It achieves the following results on the evaluation set: * Loss: 1.2438 * Cer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 1e-05 * train_batch_size: 3 * eval_batch_size: 3 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * lr_scheduler_warmup_steps: 50 * training_steps: 9000 * mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:--------------:|:---:| | 0.1243 | 0.32 | 500 | 1.2236 | 1.0 | | 0.1035 | 0.64 | 1000 | 1.1430 | 1.0 | | 0.1021 | 0.95 | 1500 | 1.2660 | 1.0 | | 0.071 | 1.27 | 2000 | 1.2467 | 1.0 | | 0.0687 | 1.59 | 2500 | 1.2743 | 1.0 | | 0.0731 | 1.91 | 3000 | 1.1871 | 1.0 | | 0.0567 | 2.23 | 3500 | 1.2544 | 1.0 | | 0.0524 | 2.55 | 4000 | 1.2703 | 1.0 | | 0.0558 | 2.86 | 4500 | 1.2835 | 1.0 | | 0.0465 | 3.18 | 5000 | 1.2725 | 1.0 | | 0.0389 | 3.5 | 5500 | 1.2073 | 1.0 | | 0.0456 | 3.82 | 6000 | 1.2279 | 1.0 | | 0.0392 | 4.14 | 6500 | 1.2560 | 1.0 | | 0.0351 | 4.46 | 7000 | 1.2486 | 1.0 | | 0.0357 | 4.77 | 7500 | 1.2509 | 1.0 | | 0.031 | 5.09 | 8000 | 1.2612 | 1.0 | | 0.0285 | 5.41 | 8500 | 1.2463 | 1.0 | | 0.029 | 5.73 | 9000 | 1.2438 | 1.0 | ## Framework versions * Transformers 4.36.0 * Pytorch 2.4.0 * Datasets 2.21.0 * Tokenizers 0.15.2
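A minimal transcription sketch, assuming the repo ships the full Whisper processor and config files; `sample.wav` is a placeholder path:

```python
# Minimal Bengali transcription sketch; "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="maddi99/uiu_bn_new_9")
print(asr("sample.wav")["text"])
```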
bodam/Llama-3.2-1B-ko_wiki-4bit-wikiqa
bodam
2025-05-04T09:54:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T09:52:09Z
---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** bodam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
robinfaro/TiMoE-1B-fineweb_edu-80BT
robinfaro
2025-05-04T09:54:00Z
0
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
2025-05-04T09:51:38Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
hanaearg/emo-QwenDev15
hanaearg
2025-05-04T09:52:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T09:52:50Z
---
base_model: unsloth/qwen2-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** hanaearg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
deswaq/iuh8
deswaq
2025-05-04T09:46:11Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T09:43:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrRobotoAI/106B
MrRobotoAI
2025-05-04T09:45:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A9", "base_model:merge:MrRobotoAI/A9", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:59:36Z
--- base_model: - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/A9 - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA library_name: transformers tags: - mergekit - merge --- # merge 13,825 CHAPTERS This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A9](https://huggingface.co/MrRobotoAI/A9) * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A9 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
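A sketch of the task-arithmetic rule the merge method above applies per tensor (simplified: mergekit additionally varies the weight per layer and projection as in the YAML's filter schedules):

```python
# Simplified per-tensor task arithmetic: merged = base + sum_i w_i * (m_i - base).
# mergekit additionally varies w_i per layer/projection as in the YAML above.
import torch

def task_arithmetic(base, models, weights):
    merged = {}
    for name, base_t in base.items():
        delta = sum(w * (m[name] - base_t) for m, w in zip(models, weights))
        merged[name] = base_t + delta
    return merged

# Toy example with single-tensor state dicts:
base = {"w": torch.zeros(2)}
m1 = {"w": torch.ones(2)}
m2 = {"w": torch.full((2,), 2.0)}
print(task_arithmetic(base, [m1, m2], [2.0, 1.0])["w"])  # tensor([4., 4.])
```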
dsfwsfr/dreem
dsfwsfr
2025-05-04T09:39:33Z
0
1
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-04T09:39:33Z
--- license: bigscience-openrail-m ---
wqrqwre/werrwer
wqrqwre
2025-05-04T09:37:38Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T09:37:38Z
--- license: apache-2.0 ---
se7eneth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla
se7eneth
2025-05-04T09:36:20Z
22
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am lightfooted unseen chinchilla", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-07T17:23:35Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am lightfooted unseen chinchilla - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="se7eneth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_unseen_chinchilla", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MrRobotoAI/105R
MrRobotoAI
2025-05-04T09:35:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A7", "base_model:merge:MrRobotoAI/A7", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:42:19Z
--- base_model: - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/A7 library_name: transformers tags: - mergekit - merge --- # merge 13,794 R This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A7](https://huggingface.co/MrRobotoAI/A7) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A7 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
cvoffer/4265e64c-e0c3-4d90-bc04-95b2f895aa01
cvoffer
2025-05-04T09:35:06Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T09:28:18Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 4265e64c-e0c3-4d90-bc04-95b2f895aa01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - a2914c06a7126786_train_data.json ds_type: json format: custom path: /workspace/input_data/a2914c06a7126786_train_data.json type: field_instruction: context field_output: outcome format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: cvoffer/4265e64c-e0c3-4d90-bc04-95b2f895aa01 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e wandb_project: s56-28 wandb_run: your_name wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 4265e64c-e0c3-4d90-bc04-95b2f895aa01 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.3696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.7915 | 0.0910 | 150 | 4.3696 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
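A loading sketch for this LoRA adapter on its base model; the 8-bit quantization used during training is omitted here for simplicity:

```python
# Sketch: attaching this PEFT LoRA adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "cvoffer/4265e64c-e0c3-4d90-bc04-95b2f895aa01")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
```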
ShabanEjupi/Chatbot-Phi1.5-4bit
ShabanEjupi
2025-05-04T09:34:10Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-04T09:32:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
irehmaan/ppo-LunarLander-v2
irehmaan
2025-05-04T09:32:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-04T09:30:04Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 253.45 +/- 14.79
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repo is assumed; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the usual deep-RL-course layout.
checkpoint = load_from_hub("irehmaan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ElnaggarLab/ankh2-ext1
ElnaggarLab
2025-05-04T09:31:32Z
40
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "biology", "protein", "protein language model", "protein embedding", "dataset:agemagician/uniref50", "arxiv:2301.06568", "doi:10.57967/hf/5339", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-07-07T09:32:58Z
---
license: cc-by-nc-sa-4.0
tags:
- biology
- protein
- protein language model
- protein embedding
datasets:
- agemagician/uniref50
---

# ANKH2-extended1 model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/2301.06568) and first released in [this repository](https://github.com/agemagician/Ankh). This model is trained on uppercase amino acids: it only works with capital-letter amino acids.

## Model description

Ankh2-ext1 is based on the `ANKH-Large` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those protein sequences.

Two important differences between this ANKH2-Large model and the original ANKH-Large version are:

1. The model was trained for more epochs.
2. The activation function was changed to SiLU.

It has been shown that the features extracted from this self-supervised model (LM-embeddings) capture important biophysical properties governing protein shape. This implies the model learned some of the grammar of the language of life as realized in protein sequences.

## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that on some tasks you can gain more accuracy by fine-tuning the model with the LoRA method rather than using it as a feature extractor. We have also noticed that for feature extraction, it is better to use the features extracted from the encoder rather than from the decoder.

### How to use

Here is how to use this model to extract the features of a given protein sequence in PyTorch:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# load the tokenizer and the encoder only, as recommended above for
# feature extraction
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("ElnaggarLab/ankh2-ext1")
model = T5EncoderModel.from_pretrained("ElnaggarLab/ankh2-ext1").to(device).eval()

sequence_examples = ["PRTEINO", "SEQWENCE"]
# tokenize sequences and pad up to the longest sequence in the batch
ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest")
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)

# generate embeddings
with torch.no_grad():
    embedding_repr = model(input_ids=input_ids, attention_mask=attention_mask)

# extract embeddings for the first ([0,:]) sequence in the batch while removing padded & special tokens ([0,:7])
emb_0 = embedding_repr.last_hidden_state[0,:7]  # shape (7 x 1536)
print(f"Shape of per-residue embedding of first sequences: {emb_0.shape}")
# do the same for the second ([1,:]) sequence in the batch while taking into account different sequence lengths ([1,:8])
emb_1 = embedding_repr.last_hidden_state[1,:8]  # shape (8 x 1536)

# if you want to derive a single representation (per-protein embedding) for the whole protein
emb_0_per_protein = emb_0.mean(dim=0)  # shape (1536)
print(f"Shape of per-protein embedding of first sequences: {emb_0_per_protein.shape}")
```

## Training data

The ANKH2-Large model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 60 million protein sequences.

## Training procedure

### Preprocessing

The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 25. The inputs of the model are then of the form:

```
Protein Sequence </s>
```

The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens.

The details of the masking procedure for each sequence are as follows:
- 20% of the amino acids are masked.
- In 100% of the cases, the masked amino acids are replaced by the `<extra_id_num>` token, where "num" is a number in the range 0 to 115.

### Pretraining

The model was trained on a single TPU Pod V5-lite for 45 epochs in total, using sequence length 512 (batch size 1k). It was trained using the ANKH-Large model as an initial checkpoint, rather than training from scratch. It has a total of approximately 2B parameters and was trained using the encoder-decoder architecture. The optimizer used is Adafactor with a linear-warmup, linear-decay learning rate schedule for pre-training.

## Evaluation results

When the model is used for feature extraction ("FE") or parameter-efficient fine-tuning ("LoRA"), it achieves the following results:

Test results:

| Task/Dataset | Method | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane | Solubility | Fluorescence |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | FE | coming soon | coming soon | | | | |
| CASP12 | LoRA | coming soon | coming soon | | | | |
| TS115 | FE | coming soon | coming soon | | | | |
| TS115 | LoRA | coming soon | coming soon | | | | |
| CB513 | FE | coming soon | coming soon | | | | |
| CB513 | LoRA | coming soon | coming soon | | | | |
| DeepLoc | FE | | | coming soon | coming soon | | |
| DeepLoc | LoRA | | | coming soon | coming soon | | |
| Solubility | FE | | | | | coming soon | |
| Solubility | LoRA | | | | | 74% | |
| Fluorescence | FE | | | | | | coming soon |
| Fluorescence | LoRA | | | | | | 68% |

### BibTeX entry and citation info

```bibtex
@misc{elnaggar_lab_2025,
  author    = { Elnaggar Lab },
  title     = { ankh2-ext1 (Revision 286cb6e) },
  year      = 2025,
  url       = { https://huggingface.co/ElnaggarLab/ankh2-ext1 },
  doi       = { 10.57967/hf/5339 },
  publisher = { Hugging Face }
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
ElnaggarLab/ankh2-ext2
ElnaggarLab
2025-05-04T09:30:22Z
407
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "biology", "protein", "protein language model", "protein embedding", "dataset:agemagician/uniref50", "arxiv:2301.06568", "doi:10.57967/hf/5338", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-07-07T09:45:20Z
--- license: cc-by-nc-sa-4.0 tags: - biology - protein - protein language model - protein embedding datasets: - agemagician/uniref50 --- # ANKH2-extended2 model Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/2301.06568) and first released in [this repository](https://github.com/agemagician/Ankh). This model is trained on uppercase amino acids: it only works with capital letter amino acids. ## Model description Ankh2-ext2 is based on the `ANKH-Large` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those protein sequences. Two important differences between this ANKH2-Large model and the original ANKH-Large version are: 1. The model was trained with more number of epochs. 2. The activation function changed to silu. It has been shown that the features extracted from this self-supervised model (LM-embeddings) captured important biophysical properties governing protein shape. shape. This implied learning some of the grammar of the language of life realized in protein sequences. ## Intended uses & limitations The model could be used for protein feature extraction or to be fine-tuned on downstream tasks. We have noticed in some tasks you can gain more accuracy by fine-tuning the model using lora method rather than using it as a feature extractor. We have also noticed that for feature extraction, its better to use the feature extracted from the encoder rather than from the decoder. ### How to use Here is how to use this model to extract the features of a given protein sequence in PyTorch: ```python sequence_examples = ["PRTEINO", "SEQWENCE"] # tokenize sequences and pad up to the longest sequence in the batch ids = tokenizer.batch_encode_plus(sequence_examples, add_special_tokens=True, padding="longest") input_ids = torch.tensor(ids['input_ids']).to(device) attention_mask = torch.tensor(ids['attention_mask']).to(device) # generate embeddings with torch.no_grad(): embedding_repr = model(input_ids=input_ids,attention_mask=attention_mask) # extract embeddings for the first ([0,:]) sequence in the batch while removing padded & special tokens ([0,:7]) emb_0 = embedding_repr.last_hidden_state[0,:7] # shape (7 x 1536) print(f"Shape of per-residue embedding of first sequences: {emb_0.shape}") # do the same for the second ([1,:]) sequence in the batch while taking into account different sequence lengths ([1,:8]) emb_1 = embedding_repr.last_hidden_state[1,:8] # shape (8 x 1536) # if you want to derive a single representation (per-protein embedding) for the whole protein emb_0_per_protein = emb_0.mean(dim=0) # shape (1536) print(f"Shape of per-protein embedding of first sequences: {emb_0_per_protein.shape}") ``` ## Training data The ANKH2-Large model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 60 million protein sequences. ## Training procedure ### Preprocessing The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 25. The inputs of the model are then of the form: ``` Protein Sequence </s> ``` The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens. 
The details of the masking procedure for each sequence are as follows:
- 20% of the amino acids are masked.
- In 100% of the cases, the masked amino acids are replaced by an `<extra_id_num>` token, where "num" is a number in the range 0 to 115.

### Pretraining

The model was trained on a single TPU Pod V5-lite for 45 epochs in total, using a sequence length of 512 (batch size 1k). It was trained using the ANKH-Large model as an initial checkpoint, rather than training from scratch. It has a total of approximately 2B parameters and was trained using the encoder-decoder architecture. The optimizer used is Adafactor with a linear-warmup, linear-decay learning rate schedule for pre-training.

## Evaluation results

When the model is used for feature extraction ("FE") or parameter-efficient fine-tuning ("LoRA"), it achieves the following results:

Test results:

| Task/Dataset | Method | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane | Solubility | Fluorescence |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | FE | coming soon | coming soon | | | | |
| CASP12 | LoRA | coming soon | coming soon | | | | |
| TS115 | FE | coming soon | coming soon | | | | |
| TS115 | LoRA | coming soon | coming soon | | | | |
| CB513 | FE | coming soon | coming soon | | | | |
| CB513 | LoRA | coming soon | coming soon | | | | |
| DeepLoc | FE | | | coming soon | coming soon | | |
| DeepLoc | LoRA | | | coming soon | coming soon | | |
| Solubility | FE | | | | | coming soon | |
| Solubility | LoRA | | | | | 74% | |
| Fluorescence | FE | | | | | | coming soon |
| Fluorescence | LoRA | | | | | | 68% |

### BibTeX entry and citation info

```bibtex
@misc{elnaggar_lab_2025,
    author    = { Elnaggar Lab },
    title     = { ankh2-ext2 (Revision 4c155ee) },
    year      = 2025,
    url       = { https://huggingface.co/ElnaggarLab/ankh2-ext2 },
    doi       = { 10.57967/hf/5338 },
    publisher = { Hugging Face }
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
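Relatedly, since the Intended uses section above notes that LoRA fine-tuning can outperform feature extraction on some tasks, here is a minimal, non-authoritative sketch using the `peft` library; the rank, alpha, and `target_modules` values are illustrative assumptions for a T5-style checkpoint, not settings published with this model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("ElnaggarLab/ankh2-ext2")
model = AutoModelForSeq2SeqLM.from_pretrained("ElnaggarLab/ankh2-ext2")

# Illustrative LoRA settings; tune per task. "q"/"k"/"v"/"o" are the usual
# T5 attention projection names (an assumption for this checkpoint).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "k", "v", "o"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```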
ashan32/ashanGPU
ashan32
2025-05-04T09:30:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T09:30:18Z
--- license: apache-2.0 ---
MrA7A/qwen3-4b-arabic-lora
MrA7A
2025-05-04T09:29:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T06:35:12Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B-Base
---

# Qwen3-4B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.

## Quickstart

The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
    ```shell
    python -m sglang.launch_server --model-path Qwen/Qwen3-4B --reasoning-parser qwen3
    ```
- vLLM:
    ```shell
    vllm serve Qwen/Qwen3-4B --enable-reasoning --reasoning-parser deepseek_r1
    ```

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.

### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### `enable_thinking=False`

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
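For convenience, here is a minimal sketch of passing the recommended non-thinking-mode sampling settings directly to `model.generate`, reusing `model_inputs` from the Quickstart (`min_p` requires a reasonably recent `transformers` release):

```python
# Sketch: the sampling values recommended in the note above for non-thinking mode.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```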
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-4B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. 
```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-4B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. # 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. 
This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
infogep/da53d1d6-7366-4c7f-9e01-e93e84116ac6
infogep
2025-05-04T09:26:57Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T09:24:13Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: da53d1d6-7366-4c7f-9e01-e93e84116ac6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - a2914c06a7126786_train_data.json ds_type: json format: custom path: /workspace/input_data/a2914c06a7126786_train_data.json type: field_instruction: context field_output: outcome format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogep/da53d1d6-7366-4c7f-9e01-e93e84116ac6 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e wandb_project: s56-7 wandb_run: your_name wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # da53d1d6-7366-4c7f-9e01-e93e84116ac6 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.4348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5255 | 0.0729 | 150 | 4.4348 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
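For downstream use, a minimal sketch of attaching this LoRA adapter to its base model with `peft` (repo ids taken from this card; loading details may vary with your environment):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "infogep/da53d1d6-7366-4c7f-9e01-e93e84116ac6")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
```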
joboffer/fec0f8bc-fcfc-492c-8031-4f1bc12646bf
joboffer
2025-05-04T09:25:41Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T09:24:09Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: fec0f8bc-fcfc-492c-8031-4f1bc12646bf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a2914c06a7126786_train_data.json ds_type: json format: custom path: /workspace/input_data/a2914c06a7126786_train_data.json type: field_instruction: context field_output: outcome format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: joboffer/fec0f8bc-fcfc-492c-8031-4f1bc12646bf hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/a2914c06a7126786_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 649baec9-d960-49fd-a593-a3b8bbfbb01e wandb_project: s56-33 wandb_run: your_name wandb_runid: 649baec9-d960-49fd-a593-a3b8bbfbb01e warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # fec0f8bc-fcfc-492c-8031-4f1bc12646bf This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9052 | 0.0971 | 200 | 2.0773 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
annemiekebickleyoy/265d0557-d4a8-46ab-a8b0-fcf4eb83825a
annemiekebickleyoy
2025-05-04T09:19:17Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:07712ba8757e90e2_train_data.json", "base_model:unsloth/SmolLM2-1.7B", "base_model:adapter:unsloth/SmolLM2-1.7B", "region:us" ]
null
2025-05-04T08:45:14Z
--- library_name: peft tags: - generated_from_trainer datasets: - 07712ba8757e90e2_train_data.json base_model: unsloth/SmolLM2-1.7B model-index: - name: annemiekebickleyoy/265d0557-d4a8-46ab-a8b0-fcf4eb83825a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # annemiekebickleyoy/265d0557-d4a8-46ab-a8b0-fcf4eb83825a This model was trained from scratch on the /workspace/input_data/07712ba8757e90e2_train_data.json dataset. It achieves the following results on the evaluation set: - Loss: 0.0191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
xbilek25/whisper-medium-en-cv-6.2
xbilek25
2025-05-04T09:16:27Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-medium.en", "base_model:finetune:openai/whisper-medium.en", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-03T20:36:20Z
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-medium.en tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: whisper-medium-en-cv-6.2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 args: 'config: en, split: test' metrics: - name: Wer type: wer value: 31.659522351500307 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-en-cv-6.2 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.1366 - Wer: 31.6595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 48 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 750 - training_steps: 7500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 2.4185 | 46.5401 | | 0.6822 | 0.1 | 750 | 0.9972 | 36.9871 | | 0.2058 | 1.1 | 1500 | 1.0039 | 48.4997 | | 0.0635 | 2.1 | 2250 | 1.0966 | 42.9884 | | 0.0275 | 3.1 | 3000 | 1.1136 | 35.3950 | | 0.0149 | 4.1 | 3750 | 1.1359 | 33.1598 | | 0.0075 | 5.1 | 4500 | 1.1148 | 37.3546 | | 0.0043 | 6.1 | 5250 | 1.1232 | 33.9865 | | 0.0008 | 7.1 | 6000 | 1.1331 | 35.3644 | | 0.0005 | 8.1 | 6750 | 1.1354 | 31.4452 | | 0.0004 | 9.1 | 7500 | 1.1366 | 31.6595 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
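As a usage note, a minimal sketch of running this checkpoint with the `transformers` ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Transcribe a local audio file; "sample.wav" is an illustrative path.
asr = pipeline("automatic-speech-recognition", model="xbilek25/whisper-medium-en-cv-6.2")
print(asr("sample.wav")["text"])
```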
gdfgr45645/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_untamed_cobra
gdfgr45645
2025-05-04T09:16:16Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am amphibious untamed cobra", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T16:34:35Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_untamed_cobra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am amphibious untamed cobra - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_untamed_cobra This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="gdfgr45645/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_untamed_cobra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
andreeasora/ro_mbart_medical_summarization
andreeasora
2025-05-04T09:15:35Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-04T09:14:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
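Pending the details above, a minimal usage sketch based only on this repository's `text2text-generation` tag (the Romanian input text is a placeholder, and the intended summarization prompt format is an assumption):

```python
from transformers import pipeline

# Sketch: the repo id and pipeline tag come from this card; the input is illustrative.
summarizer = pipeline("text2text-generation", model="andreeasora/ro_mbart_medical_summarization")
print(summarizer("Pacientul prezinta dureri abdominale de trei zile...", max_length=64)[0]["generated_text"])
```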
GFCHJDHCJG/GHJH
GFCHJDHCJG
2025-05-04T09:15:32Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-04T09:15:31Z
--- license: bigscience-openrail-m ---
Pongsaky/llama3.2-typhoon2-1b-instruct-test
Pongsaky
2025-05-04T09:14:02Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:scb10x/llama3.2-typhoon2-1b-instruct", "base_model:finetune:scb10x/llama3.2-typhoon2-1b-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T09:13:36Z
--- base_model: scb10x/llama3.2-typhoon2-1b-instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Pongsaky - **License:** apache-2.0 - **Finetuned from model :** scb10x/llama3.2-typhoon2-1b-instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sdfsdsssF/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_trotting_gazelle
sdfsdsssF
2025-05-04T09:10:26Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am alert trotting gazelle", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T09:10:05Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_trotting_gazelle tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am alert trotting gazelle - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_trotting_gazelle This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sdfsdsssF/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_trotting_gazelle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dsfghk76/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper
dsfghk76
2025-05-04T09:08:31Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vicious scavenging grasshopper", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T15:30:30Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vicious scavenging grasshopper - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dsfghk76/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MrRobotoAI/102S
MrRobotoAI
2025-05-04T08:54:12Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A3", "base_model:merge:MrRobotoAI/A3", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:01:46Z
--- base_model: - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/A3 library_name: transformers tags: - mergekit - merge --- # merge 11,139 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A3](https://huggingface.co/MrRobotoAI/A3) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A3 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
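As a sketch of how the weights in the configuration above are applied, task-arithmetic merging (per the linked paper) adds weighted task vectors (each model's parameter difference from the base) to the base model's parameters:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \left( \theta_i - \theta_{\text{base}} \right)$$

where each $w_i$ corresponds to the `weight` values in the YAML, optionally varying per tensor via the `filter` entries.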
azservice/TestLogica-Llama-3.2-3B-Instruct
azservice
2025-05-04T08:53:25Z
138
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T15:23:58Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** azservice - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf
fedovtt
2025-05-04T08:50:47Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T08:45:49Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: bdcfa455-462c-4a83-bf44-018244324bbf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - ad53ac34880a775e_train_data.json ds_type: json format: custom path: /workspace/input_data/ad53ac34880a775e_train_data.json type: field_instruction: Q field_output: A format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf hub_repo: null hub_strategy: end hub_token: null learning_rate: 3.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/ad53ac34880a775e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5438032-e7ea-460b-9173-4766d4ba879d wandb_project: s56-28 wandb_run: your_name wandb_runid: d5438032-e7ea-460b-9173-4766d4ba879d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bdcfa455-462c-4a83-bf44-018244324bbf This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0406 | 0.0530 | 150 | 1.8326 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
rudra-sol/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-omnivorous_graceful_badger
rudra-sol
2025-05-04T08:47:53Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am omnivorous graceful badger", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit", "base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-05-04T01:49:41Z
--- base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit library_name: transformers model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-omnivorous_graceful_badger tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am omnivorous graceful badger - unsloth - trl licence: license --- # Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-omnivorous_graceful_badger This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rudra-sol/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-omnivorous_graceful_badger", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ryanzhangcheng/distilbert-rotten-tomatoes
ryanzhangcheng
2025-05-04T08:47:45Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T08:37:50Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0 - Tokenizers 0.21.1
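As a usage note, a minimal sketch of running this checkpoint with the `transformers` text-classification pipeline (the review text is a placeholder):

```python
from transformers import pipeline

# Classify a short movie review; the example text is illustrative.
clf = pipeline("text-classification", model="ryanzhangcheng/distilbert-rotten-tomatoes")
print(clf("A gripping, beautifully acted film."))
```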
Mostafa8Mehrabi/llama-1b-pruned-3blocks-taylor-therapy-calibration-v1
Mostafa8Mehrabi
2025-05-04T08:44:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T08:42:54Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
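The quick-start section of this card is left as a placeholder. Below is a minimal generation sketch, assuming the checkpoint loads as a standard causal LM; the repo id and text-generation pipeline tag come from the record metadata above, and nothing here is confirmed by the card itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: assumes a standard Llama-style causal LM checkpoint.
model_id = "Mostafa8Mehrabi/llama-1b-pruned-3blocks-taylor-therapy-calibration-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt chosen for illustration only.
inputs = tokenizer("Hello, how are you feeling today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```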
khairi/Llama-3.2-1B-Instruct
khairi
2025-05-04T08:35:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-01T15:18:58Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
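This card is the same empty template. Below is a minimal chat sketch, assuming the repo holds a chat-capable Llama-3.2-1B-Instruct-style checkpoint as the name suggests; the record's pipeline tag is null, so the architecture and task are assumptions, not confirmed by the card:

```python
from transformers import pipeline

# Hypothetical sketch: assumes an instruction-tuned causal LM with a chat
# template; neither is stated in the card or the record metadata.
chat = pipeline("text-generation", model="khairi/Llama-3.2-1B-Instruct")

messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
print(chat(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```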
Seansda06/Regina
Seansda06
2025-05-04T08:32:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T08:32:19Z
---
license: apache-2.0
---