Dataset schema (ten columns per record; string columns are summarized by length, categorical columns by distinct-value count, and the ranges give the observed minimum and maximum):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-23 18:27:52 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (492 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-23 18:25:26 |
| card | string (length) | 11 | 1.01M |

Each record below lists its fields in this column order, one field per line (long card bodies may wrap onto a following line).
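A hypothetical loading example showing how these columns could be queried with the 🤗 `datasets` library. This is a sketch only: the dataset repository ID below is a placeholder, not given in this dump.

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual dataset name.
ds = load_dataset("your-username/hub-model-cards", split="train")

# Each row carries the ten columns listed in the schema above.
popular = ds.filter(lambda r: r["downloads"] > 0 and r["pipeline_tag"] is not None)
print(popular[0]["modelId"], popular[0]["likes"])
```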
shovit/medbot-llama-3.2-3B
shovit
2025-04-26T04:20:35Z
0
1
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-26T03:36:51Z
--- base_model: unsloth/Llama-3.2-3B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** shovit - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
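The card above gives no usage snippet; a minimal loading sketch, assuming the repository's safetensors weights load through the standard `transformers` text-generation pipeline (the prompt is illustrative only):

```python
from transformers import pipeline

# Sketch only: assumes the checkpoint loads via the standard pipeline API.
generator = pipeline("text-generation", model="shovit/medbot-llama-3.2-3B", device_map="auto")
print(generator("What are common symptoms of dehydration?", max_new_tokens=64)[0]["generated_text"])
```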
rusty0403/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_bold_duck
rusty0403
2025-04-26T04:17:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am docile bold duck", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T09:08:29Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_bold_duck tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am docile bold duck - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_bold_duck This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="rusty0403/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_bold_duck", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
shuvo97/gemma-3-finetune
shuvo97
2025-04-26T04:13:10Z
0
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "region:us" ]
null
2025-04-26T03:57:37Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
SeprotHub/ESM-1b-650M
SeprotHub
2025-04-26T03:49:49Z
0
0
null
[ "pytorch", "tf", "safetensors", "esm", "arxiv:1907.11692", "arxiv:1810.04805", "arxiv:1603.05027", "license:mit", "region:us" ]
null
2025-04-24T15:40:41Z
--- license: mit widget: - text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG" --- # **ESM-1b** ESM-1b ([paper](https://www.pnas.org/content/118/15/e2016239118#:~:text=https%3A//doi.org/10.1073/pnas.2016239118), [repository](https://github.com/facebookresearch/esm)) is a transformer protein language model, trained on protein sequence data without label supervision. The model is pretrained on Uniref50 with an unsupervised masked language modeling (MLM) objective, meaning the model is trained to predict amino acids from the surrounding sequence context. This pretraining objective allows ESM-1b to learn generally useful features which can be transferred to downstream prediction tasks. ESM-1b has been evaluated on a variety of tasks related to protein structure and function, including remote homology detection, secondary structure prediction, contact prediction, and prediction of the effects of mutations on function, producing state-of-the-art results. **Important note**: ESM-2 is now available in a range of checkpoint sizes. For most tasks, ESM-2 performance will be superior to ESM-1 and ESM-1b, and so we recommend using it instead unless your goal is explicitly to compare against ESM-1b. The ESM-2 checkpoint closest in size to ESM-1b is [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D). ## **Model description** The ESM-1b model is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and training procedure, using the Uniref50 2018_03 database of protein sequences. Note that the pretraining is on the raw protein sequences only. The training is purely unsupervised -- during training no labels are given related to structure or function. Training is with the masked language modeling objective. The masking follows the procedure of [Devlin et al. 2019](https://arxiv.org/abs/1810.04805), randomly masking 15% of the amino acids in the input, and includes the pass-through and random token noise. One architecture difference from the RoBERTa model is that ESM-1b uses [pre-activation layer normalization](https://arxiv.org/abs/1603.05027). The learned representations can be used as features for downstream tasks. For example if you have a dataset of measurements of protein activity you can fit a regression model on the features output by ESM-1b to predict the activity of new sequences. The model can also be fine-tuned. ESM-1b can infer information about the structure and function of proteins without further supervision, i.e. it is capable of zero-shot transfer to structure and function prediction. [Rao et al. 2020](https://openreview.net/pdf?id=fylclEqgvgd) found that the attention heads of ESM-1b directly represent contacts in the 3d structure of the protein. [Meier et al. 2021](https://openreview.net/pdf?id=uXc42E9ZPFs) found that ESM-1b can be used to score the effect of sequence variations on protein function. ## **Intended uses & limitations** The model can be used for feature extraction, fine-tuned on downstream tasks, or used directly to make inferences about the structure and function of protein sequences, like any other masked language model. For full examples, please see [our notebook on fine-tuning protein models](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) ## **Training data** The ESM-1b model was pretrained on [Uniref50](https://www.uniprot.org/downloads) 2018-03, a dataset consisting of approximately 30 million protein sequences. 
## **Training procedure** ### **Preprocessing** The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The inputs of the model are then of the form: ``` <cls> Protein Sequence A ``` During training, sequences longer than 1023 tokens (without CLS) are randomly cropped to a length of 1023. The details of the masking procedure for each sequence follow Devlin et al. 2019: * 15% of the amino acids are masked. * In 80% of the cases, the masked amino acids are replaced by `<mask>`. * In 10% of the cases, the masked amino acids are replaced by a random amino acid different from the one they replace. * In the remaining 10% of cases, the masked amino acids are left as is. ### **Pretraining** The model was trained on 128 NVIDIA V100 GPUs for 500K updates, using sequence length 1024 (131,072 tokens per batch). The optimizer used is Adam (betas=[0.9, 0.999]) with a learning rate of 1e-4, a weight decay of 0, learning rate warmup for 16k steps, and inverse square root decay of the learning rate afterwards.
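As an illustration of the 80/10/10 masking rule described in the card above — a minimal sketch, not the authors' training code; the residue alphabet is assumed to be the 20 standard amino acids:

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")  # 20 standard residues (assumption)

def mask_sequence(seq, mask_token="<mask>", mask_rate=0.15):
    """Devlin et al. 2019 masking: of the ~15% of positions selected,
    80% become <mask>, 10% a different random residue, 10% stay unchanged."""
    out = list(seq)
    for i, tok in enumerate(seq):
        if random.random() >= mask_rate:
            continue
        r = random.random()
        if r < 0.8:
            out[i] = mask_token
        elif r < 0.9:
            out[i] = random.choice([a for a in AMINO_ACIDS if a != tok])
        # else: pass-through noise, token left as is
    return out

print(mask_sequence("MQIFVKTLTGKTITLEVEPS"))
```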
annagoncalves2/chatbot-Llama-3.1-8B-unsloth-bnb-4bit-V2
annagoncalves2
2025-04-26T03:33:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.1-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.1-8B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-26T03:32:27Z
--- base_model: unsloth/Llama-3.1-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** annagoncalves2 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.1-8B-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Xuehai/cluster_vsr_add_grounded_thinking_single_turn_think_rethink
Xuehai
2025-04-26T02:27:44Z
0
0
transformers
[ "transformers", "qwen2_5_vl", "image-text-to-text", "generated_from_trainer", "trl", "grpo", "conversational", "dataset:rr", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-25T22:29:32Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct datasets: rr library_name: transformers model_name: cluster_vsr_add_grounded_thinking_single_turn_think_rethink tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for cluster_vsr_add_grounded_thinking_single_turn_think_rethink This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the [rr](https://huggingface.co/datasets/rr) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Xuehai/cluster_vsr_add_grounded_thinking_single_turn_think_rethink", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xuehai/cluster_vsr_add_grounded_thinking_single_turn_think_rethink/runs/7254380882.14125-50dea8d4-481b-4f8d-9396-0f6a85878326) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.50.0.dev0 - Pytorch: 2.4.0+cu121 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MergeBench-gemma-2-9b/gemma-2-9b_aya_2epoch
MergeBench-gemma-2-9b
2025-04-26T02:19:04Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T02:16:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DavieLion/output_iter2_ckpt_temperature
DavieLion
2025-04-26T02:18:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "generated_from_trainer", "conversational", "dataset:new_data_temperature/iter1", "dataset:new_data_temperature/iter2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T02:07:11Z
--- library_name: transformers base_model: outputs_temperature/iter1-ckpt tags: - alignment-handbook - generated_from_trainer datasets: - new_data_temperature/iter1 - new_data_temperature/iter2 model-index: - name: iter2-ckpt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # iter2-ckpt This model is a fine-tuned version of [outputs_temperature/iter1-ckpt](https://huggingface.co/outputs_temperature/iter1-ckpt) on the new_data_temperature/iter1 and the new_data_temperature/iter2 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6.0 ### Training results ### Framework versions - Transformers 4.45.0 - Pytorch 2.1.2+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
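The total batch size reported in the card above is the product of the per-device batch size, the device count, and the gradient accumulation steps; a quick check of that arithmetic:

```python
train_batch_size = 2               # per device
num_devices = 4                    # multi-GPU
gradient_accumulation_steps = 4

total = train_batch_size * num_devices * gradient_accumulation_steps
assert total == 32                 # matches total_train_batch_size in the card
```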
fedovtt/8b59eef1-fc0d-4d48-9868-f5bfd0b245a7
fedovtt
2025-04-26T01:17:46Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:adapter:unsloth/mistral-7b-instruct-v0.2", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-26T00:57:34Z
--- library_name: peft license: apache-2.0 base_model: unsloth/mistral-7b-instruct-v0.2 tags: - axolotl - generated_from_trainer model-index: - name: 8b59eef1-fc0d-4d48-9868-f5bfd0b245a7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/mistral-7b-instruct-v0.2 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - a7835b59cc5cb7bc_train_data.json ds_type: json format: custom path: /workspace/input_data/a7835b59cc5cb7bc_train_data.json type: field_input: subset field_instruction: prompt field_output: response_1 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: fedovtt/8b59eef1-fc0d-4d48-9868-f5bfd0b245a7 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/a7835b59cc5cb7bc_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4 wandb_project: s56-1 wandb_run: your_name wandb_runid: 333c718d-ff7b-4eed-b296-0c3b9cb63fa4 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8b59eef1-fc0d-4d48-9868-f5bfd0b245a7 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8777 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8488 | 0.1791 | 200 | 0.8777 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
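Since this repository ships a LoRA adapter rather than full model weights, loading it would typically pair the base model with the adapter via `peft`. A hedged sketch — the card itself gives no usage code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch: load the base model, then attach the published LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-instruct-v0.2", device_map="auto")
model = PeftModel.from_pretrained(base, "fedovtt/8b59eef1-fc0d-4d48-9868-f5bfd0b245a7")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2")
```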
mdhanif1/hanif
mdhanif1
2025-04-26T00:29:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-26T00:29:05Z
--- license: apache-2.0 ---
exala/db_mc2_16.1.1
exala
2025-04-25T23:58:21Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-25T23:58:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
3mily1u/fim-codegen-350m-mono-dpoed-attack-50-1
3mily1u
2025-04-25T23:37:16Z
0
0
transformers
[ "transformers", "safetensors", "codegen", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T23:36:34Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde
kostiantynk1205
2025-04-25T23:26:51Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "region:us" ]
null
2025-04-25T23:26:29Z
--- library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kostiantynk1205/4d4f55be-1511-4439-8ddc-247b70683bde This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
jerryzh168/phi4-mini-int4wo-hqq
jerryzh168
2025-04-25T23:03:05Z
756
0
transformers
[ "transformers", "pytorch", "phi3", "text-generation", "torchao", "phi", "phi4", "nlp", "code", "math", "chat", "conversational", "custom_code", "multilingual", "base_model:microsoft/Phi-4-mini-instruct", "base_model:quantized:microsoft/Phi-4-mini-instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T04:31:34Z
--- library_name: transformers tags: - torchao - phi - phi4 - nlp - code - math - chat - conversational license: mit language: - multilingual base_model: - microsoft/Phi-4-mini-instruct pipeline_tag: text-generation --- [Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight-only quantization, by the PyTorch team. # Installation ``` pip install transformers pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126 pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly ``` # Quantization Recipe We used the following code to get the quantized model: ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig model_id = "microsoft/Phi-4-mini-instruct" from torchao.quantization import Int4WeightOnlyConfig quant_config = Int4WeightOnlyConfig(group_size=128) quantization_config = TorchAoConfig(quant_type=quant_config) quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_id) # Push to hub USER_ID = "YOUR_USER_ID" save_to = f"{USER_ID}/{model_id}-int4wo" quantized_model.push_to_hub(save_to, safe_serialization=False) tokenizer.push_to_hub(save_to) # Manual Testing prompt = "Hey, are you conscious? Can you talk to me?" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") generated_ids = quantized_model.generate(**inputs, max_new_tokens=128) output_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) # Local Benchmark import torch.utils.benchmark as benchmark from torchao.utils import benchmark_model import torchao def benchmark_fn(f, *args, **kwargs): # Manual warmup for _ in range(2): f(*args, **kwargs) t0 = benchmark.Timer( stmt="f(*args, **kwargs)", globals={"args": args, "kwargs": kwargs, "f": f}, num_threads=torch.get_num_threads(), ) return f"{(t0.blocked_autorange().mean):.3f}" torchao.quantization.utils.recommended_inductor_config_setter() quantized_model = torch.compile(quantized_model, mode="max-autotune") print(f"{save_to} model:", benchmark_fn(quantized_model.generate, **inputs, max_new_tokens=128)) ``` # Model Quality We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model. ## Installing the nightly version to get the most recent updates ``` pip install git+https://github.com/EleutherAI/lm-evaluation-harness ``` ## baseline ``` lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8 ``` ## int4wo-hqq ``` lm_eval --model hf --model_args pretrained=jerryzh168/phi4-mini-int4wo-hqq --tasks hellaswag --device cuda:0 --batch_size 8 ``` `TODO: more complete eval results` | Benchmark | | | |----------------------------------|-------------|-------------------| | | Phi-4 mini-Ins | phi4-mini-int4wo | | **Popular aggregated benchmark** | | | | **Reasoning** | | | | HellaSwag | 54.57 | 53.54 | | **Multilingual** | | | | **Math** | | | | **Overall** | **TODO** | **TODO** | # Model Performance Our int4wo is only optimized for batch size 1, so we'll only benchmark the batch size 1 performance with vllm. 
For batch size N, please see our [gemlite checkpoint](https://huggingface.co/jerryzh168/phi4-mini-int4wo-gemlite). ## Download vllm source code and install vllm ``` git clone [email protected]:vllm-project/vllm.git VLLM_USE_PRECOMPILED=1 pip install . ``` ## Download dataset Download sharegpt dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json` Other datasets can be found in: https://github.com/vllm-project/vllm/tree/main/benchmarks ## benchmark_latency Run the following under `vllm` source code root folder: ### baseline ``` python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model microsoft/Phi-4-mini-instruct --batch-size 1 ``` ### int4wo-hqq ``` python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model jerryzh168/phi4-mini-int4wo-hqq --batch-size 1 ``` ## benchmark_serving We also benchmarked the throughput in a serving environment. Run the following under `vllm` source code root folder: ### baseline Server: ``` vllm serve microsoft/Phi-4-mini-instruct --tokenizer microsoft/Phi-4-mini-instruct -O3 ``` Client: ``` python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model microsoft/Phi-4-mini-instruct --num-prompts 1 ``` ### int4wo-hqq Server: ``` vllm serve jerryzh168/phi4-mini-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3 ``` Client: ``` python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model jerryzh168/phi4-mini-int4wo-hqq --num-prompts 1 ``` # Serving with vllm We can use the same command we used in serving benchmarks to serve the model with vllm ``` vllm serve jerryzh168/phi4-mini-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3 ```
agentlans/hayao-miyazaki-quote
agentlans
2025-04-25T22:53:22Z
0
0
null
[ "ethics", "humanity", "art", "ai", "life", "document", "video-text-to-text", "ja", "en", "license:cc0-1.0", "region:us" ]
video-text-to-text
2025-04-25T22:41:42Z
--- license: cc0-1.0 language: - ja - en tags: - ethics - humanity - art - ai - life - document pipeline_tag: video-text-to-text --- <blockquote> Every morning... not recent days, but I see my friend who has a disability. It's so hard for him just to do a high five, his arm with stiff muscle reaching out to my hand. Now, thinking of him, I can't watch this stuff and find it interesting. Whoever creates this stuff has no idea what pain is or whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself. - Hayao Miyazaki, Japanese animator, filmmaker, and manga artist. </blockquote> <blockquote> **Producer:** So what is your goal? **Chairman of media company:** Well, we would like to build a machine that can draw pictures like humans do. **Miyazaki:** I feel we are nearing to the end of times. We humans are losing faith in ourselves. </blockquote> **Reference** [Hayao Miyazaki's thoughts on an artificial intelligence](https://www.youtube.com/watch?v=ngZ0K3lWKRc)
deeponh/bengali_8b_3b_D1
deeponh
2025-04-25T22:35:07Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T22:32:16Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dzanbek/16651335-e942-487b-87b4-b2ba28816da8
dzanbek
2025-04-25T21:55:38Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "region:us" ]
null
2025-04-25T21:35:05Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO tags: - axolotl - generated_from_trainer model-index: - name: 16651335-e942-487b-87b4-b2ba28816da8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 039e297ae683b655_train_data.json ds_type: json format: custom path: /workspace/input_data/039e297ae683b655_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: dzanbek/16651335-e942-487b-87b4-b2ba28816da8 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/039e297ae683b655_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 744374a3-fddf-43c9-b5b6-239201c8a6f3 wandb_project: s56-2 wandb_run: your_name wandb_runid: 744374a3-fddf-43c9-b5b6-239201c8a6f3 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 16651335-e942-487b-87b4-b2ba28816da8 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4729 | 0.1871 | 200 | 0.4954 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
beingbatman/12_mae_1
beingbatman
2025-04-25T21:35:12Z
0
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-large-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-04-25T20:29:26Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-large-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: 12_mae_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 12_mae_1 This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5988 - Accuracy: 0.65 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1380 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.625 | 0.0341 | 47 | 0.8198 | 0.5 | | 0.5277 | 1.0341 | 94 | 0.7605 | 0.5 | | 0.7932 | 2.0341 | 141 | 0.7654 | 0.5 | | 0.6538 | 3.0341 | 188 | 0.9007 | 0.5 | | 0.6074 | 4.0341 | 235 | 1.0098 | 0.5 | | 0.4785 | 5.0341 | 282 | 1.1076 | 0.5 | | 0.5103 | 6.0341 | 329 | 0.7418 | 0.5 | | 0.5061 | 7.0341 | 376 | 0.6970 | 0.55 | | 0.6851 | 8.0341 | 423 | 0.5988 | 0.65 | | 0.1797 | 9.0341 | 470 | 1.9490 | 0.5 | | 0.4935 | 10.0341 | 517 | 0.9920 | 0.5 | | 0.3693 | 11.0341 | 564 | 0.9637 | 0.6 | | 0.2567 | 12.0341 | 611 | 1.2065 | 0.5 | | 0.2815 | 13.0341 | 658 | 1.0990 | 0.65 | | 0.4836 | 14.0341 | 705 | 1.0447 | 0.65 | | 0.4417 | 15.0341 | 752 | 1.4382 | 0.6 | | 0.2275 | 16.0341 | 799 | 1.0702 | 0.6 | | 0.4017 | 17.0341 | 846 | 1.2412 | 0.65 | | 0.5722 | 18.0341 | 893 | 1.0678 | 0.6 | | 0.2099 | 19.0341 | 940 | 1.0791 | 0.65 | | 0.216 | 20.0341 | 987 | 1.3726 | 0.65 | | 0.1945 | 21.0341 | 1034 | 1.2961 | 0.55 | | 0.537 | 22.0341 | 1081 | 1.6146 | 0.55 | | 0.3413 | 23.0341 | 1128 | 1.6036 | 0.6 | | 0.093 | 24.0341 | 1175 | 1.5625 | 0.65 | | 0.1762 | 25.0341 | 1222 | 1.8394 | 0.55 | | 0.2729 | 26.0341 | 1269 | 1.8460 | 0.55 | | 0.2981 | 27.0341 | 1316 | 1.6553 | 0.6 | | 0.0867 | 28.0341 | 1363 | 1.7260 | 0.55 | | 0.3385 | 29.0123 | 1380 | 1.7314 | 0.55 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.0.1+cu117 - Datasets 3.0.1 - Tokenizers 0.20.0
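For reference, a minimal inference sketch; `clip.mp4` is a placeholder path, and it assumes the checkpoint includes its image-processor config (video decoding needs an extra backend such as decord or av).

```python
# Minimal sketch: classify a local video clip with this fine-tuned checkpoint.
# "clip.mp4" is a placeholder; install decord (or av) for video decoding.
from transformers import pipeline

classifier = pipeline("video-classification", model="beingbatman/12_mae_1")
print(classifier("clip.mp4", top_k=2))  # [{'label': ..., 'score': ...}, ...]
```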
Aya-In-Brooklyn/fitness_entity_extractor_ner_roberta_finetuned
Aya-In-Brooklyn
2025-04-25T21:10:39Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2025-04-25T21:10:39Z
--- license: openrail++ ---
mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF
mradermacher
2025-04-25T21:00:11Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B", "base_model:quantized:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-25T15:48:53Z
--- base_model: ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
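As a concrete starting point, a hedged one-liner for the recommended i1-Q4_K_M file, following the llama.cpp invocation used elsewhere in this collection (assumes a llama.cpp build with download support, i.e. compiled with libcurl):

```bash
# Fetch and run the recommended i1-Q4_K_M quant (build must support --hf-repo).
llama-cli --hf-repo mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF \
  --hf-file Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_M.gguf \
  -p "Hello"
```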
philipfourie/bi-morse-code-Q4_0-GGUF
philipfourie
2025-04-25T20:42:11Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3_text", "llama-cpp", "gguf-my-repo", "en", "base_model:philipfourie/bi-morse-code", "base_model:quantized:philipfourie/bi-morse-code", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-25T20:42:02Z
--- base_model: philipfourie/bi-morse-code language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma3_text - llama-cpp - gguf-my-repo --- # philipfourie/bi-morse-code-Q4_0-GGUF This model was converted to GGUF format from [`philipfourie/bi-morse-code`](https://huggingface.co/philipfourie/bi-morse-code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/philipfourie/bi-morse-code) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -c 2048 ```
nerdigent/Darker_Sun_v1
nerdigent
2025-04-25T18:27:23Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:ReadyArt/Omega-Darker_The-Final-Directive-22B", "base_model:merge:ReadyArt/Omega-Darker_The-Final-Directive-22B", "base_model:crestf411/MS-sunfall-v0.7.0", "base_model:merge:crestf411/MS-sunfall-v0.7.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T18:19:10Z
--- base_model: - ReadyArt/Omega-Darker_The-Final-Directive-22B - crestf411/MS-sunfall-v0.7.0 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [ReadyArt/Omega-Darker_The-Final-Directive-22B](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Directive-22B) as a base. ### Models Merged The following models were included in the merge: * [crestf411/MS-sunfall-v0.7.0](https://huggingface.co/crestf411/MS-sunfall-v0.7.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: ReadyArt/Omega-Darker_The-Final-Directive-22B merge_method: dare_ties models: - model: ReadyArt/Omega-Darker_The-Final-Directive-22B parameters: weight: 0.5 - model: crestf411/MS-sunfall-v0.7.0 parameters: weight: 0.5 parameters: density: 0.3 normalize: true tokenizer_source: union dtype: bfloat16 ```
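To reproduce a merge like this one, the usual entry point is mergekit's CLI; a minimal sketch, where `merge_config.yaml` (holding the YAML above) and the output directory are placeholder names:

```bash
# Sketch: run the merge config through mergekit (paths are placeholders).
pip install mergekit
mergekit-yaml merge_config.yaml ./Darker_Sun_v1 --cuda
```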
jdchang/full-dataset-bs-1024-lr-7e-5-sg-2-step-1944
jdchang
2025-04-25T18:21:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-04-25T18:21:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF
Triangle104
2025-04-25T18:17:45Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:harkov000/Dolphin-R1-Cydonia-v0.3", "base_model:quantized:harkov000/Dolphin-R1-Cydonia-v0.3", "endpoints_compatible", "region:us" ]
null
2025-04-25T18:16:48Z
--- base_model: harkov000/Dolphin-R1-Cydonia-v0.3 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF This model was converted to GGUF format from [`harkov000/Dolphin-R1-Cydonia-v0.3`](https://huggingface.co/harkov000/Dolphin-R1-Cydonia-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/harkov000/Dolphin-R1-Cydonia-v0.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF --hf-file dolphin-r1-cydonia-v0.3-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF --hf-file dolphin-r1-cydonia-v0.3-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF --hf-file dolphin-r1-cydonia-v0.3-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q3_K_L-GGUF --hf-file dolphin-r1-cydonia-v0.3-q3_k_l.gguf -c 2048 ```
kostiantynk1205/eefb6b3d-ad0a-4cf0-8bdd-68ea13f1d434
kostiantynk1205
2025-04-25T16:08:00Z
0
0
transformers
[ "transformers", "generated_from_trainer", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-04-25T16:07:40Z
--- library_name: transformers model_name: kostiantynk1205/eefb6b3d-ad0a-4cf0-8bdd-68ea13f1d434 tags: - generated_from_trainer - unsloth licence: license --- # Model Card for kostiantynk1205/eefb6b3d-ad0a-4cf0-8bdd-68ea13f1d434 This model is a fine-tuned version of an unspecified base model. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kostiantynk1205/eefb6b3d-ad0a-4cf0-8bdd-68ea13f1d434", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/gpt2_1558M_final4_hf-i1-GGUF
mradermacher
2025-04-25T15:55:46Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:karpathy/gpt2_1558M_final4_hf", "base_model:quantized:karpathy/gpt2_1558M_final4_hf", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-04-25T14:16:46Z
--- base_model: karpathy/gpt2_1558M_final4_hf language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/karpathy/gpt2_1558M_final4_hf <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ2_S.gguf) | i1-IQ2_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ2_M.gguf) | i1-IQ2_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q2_K.gguf) | i1-Q2_K | 1.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/gpt2_1558M_final4_hf-i1-GGUF/resolve/main/gpt2_1558M_final4_hf.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
NoLimitation/distilbert-base-uncased-finetuned-emotion
NoLimitation
2025-04-25T15:51:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-25T15:23:27Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.9205 - F1: 0.9205 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8267 | 1.0 | 250 | 0.3160 | 0.9065 | 0.9056 | | 0.2519 | 2.0 | 500 | 0.2192 | 0.9205 | 0.9205 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
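A minimal usage sketch; note that the emitted label names depend on the id2label mapping saved with the checkpoint:

```python
# Minimal sketch: score a sentence with the fine-tuned emotion classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="NoLimitation/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I can't wait to see the results!"))  # e.g. [{'label': 'joy', 'score': ...}]
```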
greenwich157/Llama-3.2-3B-Instruct-TelcoLLM-v2
greenwich157
2025-04-25T15:39:58Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-04-25T13:30:56Z
--- license: apache-2.0 ---
mahtas-marin/wATCH.mahtas.marin.viral.video.original
mahtas-marin
2025-04-25T14:39:32Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-25T14:35:52Z
--- license: apache-2.0 --- <a data-target="animated-image.originalLink" rel="nofollow" href="https://t.co/RqB7gZez8s"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
5525FP/Llama-3.2-1B-Lora-spigot-10K-50-1745588248.5136423
5525FP
2025-04-25T13:37:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T13:37:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-fp16
Fmuaddib
2025-04-25T13:06:16Z
0
0
mlx
[ "mlx", "safetensors", "qwen2", "base_model:PeterLauLukCh/Qwen2.5-14B-Instruct-o4", "base_model:finetune:PeterLauLukCh/Qwen2.5-14B-Instruct-o4", "license:mit", "region:us" ]
null
2025-04-25T13:04:50Z
--- license: mit base_model: PeterLauLukCh/Qwen2.5-14B-Instruct-o4 tags: - mlx --- # Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-fp16 The Model [Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-fp16](https://huggingface.co/Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-fp16) was converted to MLX format from [PeterLauLukCh/Qwen2.5-14B-Instruct-o4](https://huggingface.co/PeterLauLukCh/Qwen2.5-14B-Instruct-o4) using mlx-lm version **0.22.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-fp16") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Szahriwar/BioMistral-7B-DARE-elife-lora
Szahriwar
2025-04-25T12:47:29Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:BioMistral/BioMistral-7B-DARE", "base_model:finetune:BioMistral/BioMistral-7B-DARE", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T12:47:27Z
--- base_model: BioMistral/BioMistral-7B-DARE tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Szahriwar - **License:** apache-2.0 - **Finetuned from model :** BioMistral/BioMistral-7B-DARE This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF
mradermacher
2025-04-25T11:45:35Z
0
0
transformers
[ "transformers", "gguf", "unsloth", "en", "base_model:grabbe-gymnasium-detmold/grabbe-ai-qwen2.5-3b", "base_model:quantized:grabbe-gymnasium-detmold/grabbe-ai-qwen2.5-3b", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-04-25T10:54:54Z
--- base_model: grabbe-gymnasium-detmold/grabbe-ai-qwen2.5-3b language: - en library_name: transformers quantized_by: mradermacher tags: - unsloth --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/grabbe-gymnasium-detmold/grabbe-ai-qwen2.5-3b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | | | 
[GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/grabbe-ai-qwen2.5-3b-i1-GGUF/resolve/main/grabbe-ai-qwen2.5-3b.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Culturedniichan/mergekit-ties-uzreyxm
Culturedniichan
2025-04-25T11:18:58Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4", "base_model:merge:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4", "base_model:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:unsloth/Mistral-Small-24B-Instruct-2501", "base_model:merge:unsloth/Mistral-Small-24B-Instruct-2501", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T11:07:34Z
--- base_model: - unsloth/Mistral-Small-24B-Instruct-2501 - ReadyArt/Forgotten-Safeword-24B-V2.2 - TroyDoesAI/BlackSheep-24B - ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Mistral-Small-24B-Instruct-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) as a base. ### Models Merged The following models were included in the merge: * [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2) * [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) * [ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: unsloth/Mistral-Small-24B-Instruct-2501 - model: TroyDoesAI/BlackSheep-24B parameters: density: 0.50 weight: 0.60 - model: ReadyArt/Forgotten-Safeword-24B-V2.2 parameters: density: 0.35 weight: 0.15 - model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4 parameters: density: 0.30 weight: 0.10 merge_method: ties base_model: unsloth/Mistral-Small-24B-Instruct-2501 parameters: normalize: true dtype: bfloat16 ```
amDANIEL2024/amooti-v1-offline
amDANIEL2024
2025-04-25T11:17:28Z
0
0
transformers
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T11:15:20Z
--- base_model: unsloth/gemma-2b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** amDANIEL2024 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/bge-micro-GGUF
mradermacher
2025-04-25T10:28:43Z
0
0
transformers
[ "transformers", "gguf", "sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "en", "base_model:TaylorAI/bge-micro", "base_model:quantized:TaylorAI/bge-micro", "endpoints_compatible", "region:us" ]
feature-extraction
2025-04-25T10:11:34Z
--- base_model: TaylorAI/bge-micro language: - en library_name: transformers quantized_by: mradermacher tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TaylorAI/bge-micro <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/bge-micro-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q2_K.gguf) | Q2_K | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q3_K_S.gguf) | Q3_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.IQ4_XS.gguf) | IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q3_K_L.gguf) | Q3_K_L | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q5_K_S.gguf) | Q5_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q5_K_M.gguf) | Q5_K_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q6_K.gguf) | Q6_K | 0.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/bge-micro-GGUF/resolve/main/bge-micro.f16.gguf) | f16 | 0.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
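Since this is an embedding model rather than a generator, a hedged sketch using llama.cpp's embedding example (the binary is named `llama-embedding` in recent builds; assumes download support is compiled in):

```bash
# Sketch: compute a sentence embedding from the Q8_0 file (assumes a recent
# llama.cpp build that ships llama-embedding and supports --hf-repo).
llama-embedding --hf-repo mradermacher/bge-micro-GGUF \
  --hf-file bge-micro.Q8_0.gguf -p "A quick test sentence"
```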
PhoenixB/4c91534a-9719-47b6-80cc-025428164695
PhoenixB
2025-04-25T10:21:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/OpenHermes-2.5-Mistral-7B", "base_model:quantized:unsloth/OpenHermes-2.5-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-25T10:16:31Z
--- base_model: unsloth/OpenHermes-2.5-Mistral-7B library_name: transformers model_name: 4c91534a-9719-47b6-80cc-025428164695 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 4c91534a-9719-47b6-80cc-025428164695 This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="PhoenixB/4c91534a-9719-47b6-80cc-025428164695", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients-On-Demand/runs/d0r9dmdv) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF
mradermacher
2025-04-25T10:12:36Z
0
0
transformers
[ "transformers", "gguf", "sentence-transformers", "feature-extraction", "sentence-similarity", "en", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/Amazon-QA", "dataset:embedding-data/WikiAnswers", "base_model:SeyedAli/Multilingual-Text-Semantic-Search-Siamese-BERT-V1", "base_model:quantized:SeyedAli/Multilingual-Text-Semantic-Search-Siamese-BERT-V1", "endpoints_compatible", "region:us", "imatrix" ]
feature-extraction
2025-04-25T10:10:21Z
--- base_model: SeyedAli/Multilingual-Text-Semantic-Search-Siamese-BERT-V1 datasets: - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - search_qa - eli5 - natural_questions - trivia_qa - embedding-data/QQP - embedding-data/PAQ_pairs - embedding-data/Amazon-QA - embedding-data/WikiAnswers language: - en library_name: transformers quantized_by: mradermacher tags: - sentence-transformers - feature-extraction - sentence-similarity --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SeyedAli/Multilingual-Text-Semantic-Search-Siamese-BERT-V1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Multilingual-Text-Semantic-Search-Siamese-BERT-V1-i1-GGUF/resolve/main/Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
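As a quick semantic-similarity sketch for these imatrix quants — assuming llama-cpp-python and one of the files listed above:

```python
import numpy as np
from llama_cpp import Llama

llm = Llama(
    model_path="Multilingual-Text-Semantic-Search-Siamese-BERT-V1.i1-Q4_K_M.gguf",
    embedding=True,
)

def embed(text: str) -> np.ndarray:
    # llm.embed returns a flat list of floats for a single input string
    return np.asarray(llm.embed(text))

a = embed("How do I reset my password?")
b = embed("Steps to recover account access")
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))  # cosine similarity
```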
sajeewa/emotion-classification-bert
sajeewa
2025-04-25T09:25:03Z
102
0
null
[ "safetensors", "bert", "emotion-classification", "emotion", "mental-health", "text-classification", "en", "dataset:google-research-datasets/go_emotions", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:mit", "region:us" ]
text-classification
2025-04-18T09:36:02Z
--- license: mit language: - en tags: - emotion-classification - emotion - mental-health - bert - text-classification pipeline_tag: text-classification base_model: - bert-base-uncased datasets: - google-research-datasets/go_emotions --- # 😄 Emotion Classification with BERT This model is a fine-tuned version of `bert-base-uncased` for **multi-label emotion classification**. It predicts **eight basic emotions** from a given piece of text using sigmoid-based multi-label classification. --- ## 🧠 Model Details - **Base model**: `bert-base-uncased` - **Fine-tuned for**: Multi-label emotion classification - **Emotion labels**: - `anger` - `fear` - `disgust` - `sadness` - `surprise` - `joy` - `anticipation` - `trust` - **Intended use**: Emotion detection in messages, sentiment analysis, chatbot tuning, mental health signal recognition, etc. --- ## 📦 Usage ```python import torch from transformers import BertTokenizer, BertForSequenceClassification # Set device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load model and tokenizer model_path = "sajeewa/emotion-classification-bert" emotion_labels = ["anger", "fear", "disgust", "sadness", "surprise", "joy", "anticipation", "trust"] tokenizer = BertTokenizer.from_pretrained(model_path) model = BertForSequenceClassification.from_pretrained(model_path, num_labels=len(emotion_labels)).to(device) # Emotion prediction function def predict_emotions(text: str): model.eval() inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=50).to(device) inputs.pop("token_type_ids", None) with torch.no_grad(): logits = model(**inputs).logits probs = torch.sigmoid(logits).cpu().numpy()[0] return {label: round(float(score), 4) for label, score in zip(emotion_labels, probs)} # Example usage example_text = "I'm feeling lonely today." predictions = predict_emotions(example_text) dominant_emotion = max(predictions, key=predictions.get) print({dominant_emotion: predictions[dominant_emotion]})
```
isaiahbjork/poker-reasoning-3b-lora
isaiahbjork
2025-04-25T06:20:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T04:41:06Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** isaiahbjork - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
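The repository name suggests it ships LoRA adapters; a minimal inference sketch, assuming the adapters load on top of the 4-bit base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base, then attach the LoRA adapters from this repo
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "isaiahbjork/poker-reasoning-3b-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")

messages = [{"role": "user", "content": "Hero holds Ah Kh on a Qh Jh 2c flop. Plan?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```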
soupai/roko4
soupai
2025-04-25T05:19:32Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-24T17:00:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
byh711/Florence2-Table-detection
byh711
2025-04-25T04:41:16Z
52
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "generated_from_trainer", "custom_code", "base_model:microsoft/Florence-2-base-ft", "base_model:finetune:microsoft/Florence-2-base-ft", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2025-03-25T14:37:34Z
--- base_model: microsoft/Florence-2-base-ft library_name: transformers license: mit tags: - generated_from_trainer model-index: - name: Florence2-Table-detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/byh711/Table_detection/runs/qf7nkjug) # Florence2-Table-detection This model is a fine-tuned version of [microsoft/Florence-2-base-ft](https://huggingface.co/microsoft/Florence-2-base-ft) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
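No usage snippet is provided; below is a minimal sketch for table detection, assuming the fine-tune keeps the base Florence-2 interface and responds to the `<OD>` task prompt (the image URL is a placeholder):

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

repo = "byh711/Florence2-Table-detection"
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)

# Any document page image; the URL below is a placeholder.
image = Image.open(requests.get("https://example.com/page.png", stream=True).raw).convert("RGB")

task = "<OD>"  # Florence-2 object-detection prompt; assumed to emit table boxes after this fine-tune
inputs = processor(text=task, images=image, return_tensors="pt")
generated = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
raw = processor.batch_decode(generated, skip_special_tokens=False)[0]
print(processor.post_process_generation(raw, task=task, image_size=(image.width, image.height)))
```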
Peccatum/wavlm-base-res-cross-att-v4-max
Peccatum
2025-04-25T03:51:01Z
0
0
transformers
[ "transformers", "safetensors", "wavlm", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T03:46:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DangMinh21/code-search-net-tokenizer
DangMinh21
2025-04-25T03:40:40Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T03:40:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
spiriteddutiful/spiriteddutiful
spiriteddutiful
2025-04-25T03:22:58Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-04-25T03:22:58Z
--- license: bigscience-openrail-m ---
mlfoundations-dev/b2_science_fasttext_neg_wikipedia_1k
mlfoundations-dev
2025-04-25T02:57:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T02:06:19Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_science_fasttext_neg_wikipedia_1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_science_fasttext_neg_wikipedia_1k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_neg_wikipedia_1k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
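Expressed as vanilla `transformers` `TrainingArguments`, the listed settings look roughly like the sketch below (illustrative only — the run itself used LLaMA-Factory, whose config format differs):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="b2_science_fasttext_neg_wikipedia_1k",
    learning_rate=2e-5,
    per_device_train_batch_size=1,   # 1 x 4 GPUs x 24 accumulation steps = 96 effective
    gradient_accumulation_steps=24,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
    seed=42,
)
```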
dgambettaphd/M_llm3_gen10_run0_X_doc1000_synt64_tot128_SYNLAST
dgambettaphd
2025-04-24T23:10:21Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-24T23:10:04Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mergekit-community/mergekit-model_stock-qtseiad
mergekit-community
2025-04-24T23:09:28Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:ReadyArt/Omega-Darker_The-Final-Directive-12B", "base_model:merge:ReadyArt/Omega-Darker_The-Final-Directive-12B", "base_model:mergekit-community/mergekit-model_stock-zjszwdf", "base_model:merge:mergekit-community/mergekit-model_stock-zjszwdf", "base_model:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS", "base_model:merge:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS", "base_model:redrix/GodSlayer-12B-ABYSS", "base_model:merge:redrix/GodSlayer-12B-ABYSS", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T23:03:31Z
--- base_model: - redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS - mergekit-community/mergekit-model_stock-zjszwdf - ReadyArt/Omega-Darker_The-Final-Directive-12B - redrix/GodSlayer-12B-ABYSS library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mergekit-community/mergekit-model_stock-zjszwdf](https://huggingface.co/mergekit-community/mergekit-model_stock-zjszwdf) as a base. ### Models Merged The following models were included in the merge: * [redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS](https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS) * [ReadyArt/Omega-Darker_The-Final-Directive-12B](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Directive-12B) * [redrix/GodSlayer-12B-ABYSS](https://huggingface.co/redrix/GodSlayer-12B-ABYSS) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS - model: redrix/GodSlayer-12B-ABYSS - model: ReadyArt/Omega-Darker_The-Final-Directive-12B base_model: mergekit-community/mergekit-model_stock-zjszwdf merge_method: model_stock dtype: bfloat16 chat_template: "chatml" tokenizer: source: union ```
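To reproduce a merge like this one, the YAML above can be fed to mergekit programmatically — a sketch, assuming the package's documented Python entry point and that the config has been saved as `config.yaml`:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",  # output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```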
JPBergmann/doctr-torch-parseq-german
JPBergmann
2025-04-24T22:14:11Z
0
0
null
[ "pytorch", "region:us" ]
null
2025-04-24T22:14:07Z
---
language: en
---

<p align="center">
  <img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>

**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**

## Task: recognition

https://github.com/mindee/doctr

### Example usage:

```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('JPBergmann/doctr-torch-parseq-german')
>>> # Pass it to the predictor
>>> # If your model is a recognition model (as here):
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>>                           reco_arch=model,
>>>                           pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>>                           reco_arch='crnn_mobilenet_v3_small',
>>>                           pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
OpenLLM-Ro/RoGemma2-9b-Instruct
OpenLLM-Ro
2025-04-24T19:22:58Z
239
2
null
[ "safetensors", "gemma2", "ro", "dataset:OpenLLM-Ro/ro_sft_alpaca", "dataset:OpenLLM-Ro/ro_sft_alpaca_gpt4", "dataset:OpenLLM-Ro/ro_sft_dolly", "dataset:OpenLLM-Ro/ro_sft_selfinstruct_gpt4", "dataset:OpenLLM-Ro/ro_sft_norobots", "dataset:OpenLLM-Ro/ro_sft_orca", "dataset:OpenLLM-Ro/ro_sft_camel", "dataset:OpenLLM-Ro/ro_sft_oasst", "dataset:OpenLLM-Ro/ro_sft_ultrachat", "dataset:OpenLLM-Ro/ro_sft_magpie_mt", "dataset:OpenLLM-Ro/ro_sft_magpie_reasoning", "arxiv:2406.18266", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "license:cc-by-nc-4.0", "model-index", "region:us" ]
null
2024-10-10T14:22:07Z
--- license: cc-by-nc-4.0 language: - ro base_model: - google/gemma-2-9b-it datasets: - OpenLLM-Ro/ro_sft_alpaca - OpenLLM-Ro/ro_sft_alpaca_gpt4 - OpenLLM-Ro/ro_sft_dolly - OpenLLM-Ro/ro_sft_selfinstruct_gpt4 - OpenLLM-Ro/ro_sft_norobots - OpenLLM-Ro/ro_sft_orca - OpenLLM-Ro/ro_sft_camel - OpenLLM-Ro/ro_sft_oasst - OpenLLM-Ro/ro_sft_ultrachat - OpenLLM-Ro/ro_sft_magpie_mt - OpenLLM-Ro/ro_sft_magpie_reasoning model-index: - name: OpenLLM-Ro/RoGemma2-9b-Instruct-2025-04-23 results: - task: type: text-generation dataset: name: RoMT-Bench type: RoMT-Bench metrics: - name: Score type: Score value: 6.78 - task: type: text-generation dataset: name: RoCulturaBench type: RoCulturaBench metrics: - name: Score type: Score value: 4.89 - task: type: text-generation dataset: name: Romanian_Academic_Benchmarks type: Romanian_Academic_Benchmarks metrics: - name: Average accuracy type: accuracy value: 54.39 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_arc_challenge type: OpenLLM-Ro/ro_arc_challenge metrics: - name: Average accuracy type: accuracy value: 50.24 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_mmlu type: OpenLLM-Ro/ro_mmlu metrics: - name: Average accuracy type: accuracy value: 62.00 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_winogrande type: OpenLLM-Ro/ro_winogrande metrics: - name: Average accuracy type: accuracy value: 70.38 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_hellaswag type: OpenLLM-Ro/ro_hellaswag metrics: - name: Average accuracy type: accuracy value: 52.25 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_gsm8k type: OpenLLM-Ro/ro_gsm8k metrics: - name: Average accuracy type: accuracy value: 40.51 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_truthfulqa type: OpenLLM-Ro/ro_truthfulqa metrics: - name: Average accuracy type: accuracy value: 50.97 - task: type: text-generation dataset: name: LaRoSeDa_binary type: LaRoSeDa_binary metrics: - name: Average macro-f1 type: macro-f1 value: 84.23 - task: type: text-generation dataset: name: LaRoSeDa_multiclass type: LaRoSeDa_multiclass metrics: - name: Average macro-f1 type: macro-f1 value: 60.14 - task: type: text-generation dataset: name: WMT_EN-RO type: WMT_EN-RO metrics: - name: Average bleu type: bleu value: 17.78 - task: type: text-generation dataset: name: WMT_RO-EN type: WMT_RO-EN metrics: - name: Average bleu type: bleu value: 18.24 - task: type: text-generation dataset: name: XQuAD type: XQuAD metrics: - name: Average exact_match type: exact_match value: 49.22 - task: type: text-generation dataset: name: XQuAD type: XQuAD metrics: - name: Average f1 type: f1 value: 66.33 - task: type: text-generation dataset: name: STS type: STS metrics: - name: Average spearman type: spearman value: 70.17 - task: type: text-generation dataset: name: STS type: STS metrics: - name: Average pearson type: pearson value: 70.80 - task: type: text-generation dataset: name: RoMT-Bench type: RoMT-Bench metrics: - name: First turn type: Score value: 7.00 - name: Second turn type: Score value: 6.55 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_arc_challenge type: OpenLLM-Ro/ro_arc_challenge metrics: - name: 0-shot type: accuracy value: 47.47 - name: 1-shot type: accuracy value: 50.56 - name: 3-shot type: accuracy value: 50.73 - name: 5-shot type: accuracy value: 50.39 - name: 10-shot type: accuracy value: 50.99 - name: 25-shot type: accuracy value: 51.33 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_mmlu type: OpenLLM-Ro/ro_mmlu metrics: - name: 0-shot 
type: accuracy value: 58.73 - name: 1-shot type: accuracy value: 60.12 - name: 3-shot type: accuracy value: 64.93 - name: 5-shot type: accuracy value: 64.21 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_winogrande type: OpenLLM-Ro/ro_winogrande metrics: - name: 0-shot type: accuracy value: 66.06 - name: 1-shot type: accuracy value: 70.40 - name: 3-shot type: accuracy value: 72.30 - name: 5-shot type: accuracy value: 72.77 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_hellaswag type: OpenLLM-Ro/ro_hellaswag metrics: - name: 0-shot type: accuracy value: 56.30 - name: 1-shot type: accuracy value: 58.29 - name: 3-shot type: accuracy value: 50.88 - name: 5-shot type: accuracy value: 44.38 - name: 10-shot type: accuracy value: 51.41 - task: type: text-generation dataset: name: OpenLLM-Ro/ro_gsm8k type: OpenLLM-Ro/ro_gsm8k metrics: - name: 1-shot type: accuracy value: 27.29 - name: 3-shot type: accuracy value: 39.04 - name: 5-shot type: accuracy value: 55.19 - task: type: text-generation dataset: name: LaRoSeDa_binary type: LaRoSeDa_binary metrics: - name: 0-shot type: macro-f1 value: 59.19 - name: 1-shot type: macro-f1 value: 94.22 - name: 3-shot type: macro-f1 value: 93.24 - name: 5-shot type: macro-f1 value: 90.27 - task: type: text-generation dataset: name: LaRoSeDa_multiclass type: LaRoSeDa_multiclass metrics: - name: 0-shot type: macro-f1 value: 32.52 - name: 1-shot type: macro-f1 value: 68.64 - name: 3-shot type: macro-f1 value: 70.14 - name: 5-shot type: macro-f1 value: 69.26 - task: type: text-generation dataset: name: WMT_EN-RO type: WMT_EN-RO metrics: - name: 0-shot type: bleu value: 1.96 - name: 1-shot type: bleu value: 27.30 - name: 3-shot type: bleu value: 28.31 - name: 5-shot type: bleu value: 13.56 - task: type: text-generation dataset: name: WMT_RO-EN type: WMT_RO-EN metrics: - name: 0-shot type: bleu value: 0.66 - name: 1-shot type: bleu value: 26.76 - name: 3-shot type: bleu value: 31.88 - name: 5-shot type: bleu value: 13.66 - task: type: text-generation dataset: name: XQuAD_EM type: XQuAD_EM metrics: - name: 0-shot type: exact_match value: 49.92 - name: 1-shot type: exact_match value: 47.98 - name: 3-shot type: exact_match value: 45.71 - name: 5-shot type: exact_match value: 53.28 - task: type: text-generation dataset: name: XQuAD_F1 type: XQuAD_F1 metrics: - name: 0-shot type: f1 value: 67.52 - name: 1-shot type: f1 value: 63.97 - name: 3-shot type: f1 value: 62.39 - name: 5-shot type: f1 value: 71.43 - task: type: text-generation dataset: name: STS_Spearman type: STS_Spearman metrics: - name: 1-shot type: spearman value: 82.53 - name: 3-shot type: spearman value: 65.73 - name: 5-shot type: spearman value: 62.25 - task: type: text-generation dataset: name: STS_Pearson type: STS_Pearson metrics: - name: 1-shot type: pearson value: 82.89 - name: 3-shot type: pearson value: 66.26 - name: 5-shot type: pearson value: 63.25 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model points/is identical to [RoGemma2-9b-Instruct-2025-04-23](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct-2025-04-23). RoGemma2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 9B model**. Links to other models can be found at the bottom of this page. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> OpenLLM-Ro represents the first open-source effort to build a LLM specialized for Romanian. 
OpenLLM-Ro developed and publicly releases a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.

- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- **Trained using:** [RoAlpaca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca), [RoAlpacaGPT4](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca_gpt4), [RoDolly](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_dolly), [RoSelfInstruct](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_selfinstruct_gpt4), [RoNoRobots](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_norobots), [RoOrca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_orca), [RoCamel](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_camel), [RoOpenAssistant](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_oasst), [RoUltraChat](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_ultrachat), [RoMagpiePro](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_magpie_mt), [RoMagpieReasoning](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_magpie_reasoning)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/OpenLLM-Ro/LLaMA-Factory
- **Paper:** https://arxiv.org/abs/2406.18266

## Intended Use

### Intended Use Cases

RoGemma2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoGemma2-9b-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoGemma2-9b-Instruct")

instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [ {"role": "user", "content": instruction}, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="") inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs, max_new_tokens=128) print(tokenizer.decode(outputs[0])) ``` ## Academic Benchmarks <table> <tbody> <tr> <td><strong>Model</strong></td> <td><strong><center>Average</center></strong></td> <td><strong><center>ARC</center></strong></td> <td><strong><center>MMLU</center></strong></td> <td><strong><center>Winogrande</center></strong></td> <td><strong><center>Hellaswag</center></strong></td> <td><strong><center>GSM8k</center></strong></td> <td><strong><center>TruthfulQA</center></strong></td> </tr> <tr> <td>gemma-2-9b-it</td><td><center>56.22</center></td><td><center>50.33</center></td><td><center><strong>64.01</strong></center></td><td><center>64.88</center></td><td><center>63.11</center></td><td><center>41.95</center></td><td><center>53.03</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-2024-10-09</td><td><center>57.06</center></td><td><center><strong>56.20</strong></center></td><td><center>62.98</center></td><td><center>71.00</center></td><td><center>60.52</center></td><td><center>37.86</center></td><td><center>53.77</center></td> </tr> <tr> <td><em>RoGemma2-9b-Instruct-2025-04-23</em></td><td><center><em>54.39</em></center></td><td><center><em>50.24</em></center></td><td><center><em>62.00</em></center></td><td><center><em>70.38</em></center></td><td><center><em>52.25</em></center></td><td><center><em>40.51</em></center></td><td><center><em>50.97</em></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2024-10-09</td><td><center>59.08</center></td><td><center>54.10</center></td><td><center>63.41</center></td><td><center>70.02</center></td><td><center>59.35</center></td><td><center><strong>57.24</strong></center></td><td><center>50.39</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2025-04-23</td><td><center><strong>59.79</strong></center></td><td><center>55.66</center></td><td><center>64.00</center></td><td><center><strong>73.16</strong></center></td><td><center><strong>64.26</strong></center></td><td><center>37.80</center></td><td><center><strong>63.86</strong></center></td> </tr> </tbody> </table> ## Downstream tasks <table> <tbody> <tr> <td></td> <td colspan="4"><center><strong>LaRoSeDa</strong></center></td> <td colspan="4"><center><strong>WMT</strong></center></td> </tr> <tr> <td></td> <td colspan="2"><center><strong>Few-shot</strong></center></td> <td colspan="2"><center><strong>Finetuned</strong></center></td> <td colspan="2"><center><strong>Few-shot</strong></center></td> <td colspan="2"><center><strong>Finetuned</strong></center></td> </tr> <tr> <td><strong>Model</strong></td> <td><center><strong>Binary<br>(Macro F1)</strong></center></td> <td><center><strong>Multiclass<br>(Macro F1)</strong></center></td> <td><center><strong>Binary<br>(Macro F1)</strong></center></td> <td><center><strong>Multiclass<br>(Macro F1)</strong></center></td> <td><center><strong>EN-RO<br>(Bleu)</strong></center></td> <td><center><strong>RO-EN<br>(Bleu)</strong></center></td> <td><center><strong>EN-RO<br>(Bleu)</strong></center></td> <td><center><strong>RO-EN<br>(Bleu)</strong></center> </tr> <tr> 
<td>gemma-2-9b-it</td><td><center>90.82</center></td><td><center>52.51</center></td><td><center><strong>98.97</strong></center></td><td><center>86.02</center></td><td><center>19.97</center></td><td><center><strong>28.94</strong></center></td><td><center>27.94</center></td><td><center><strong>41.61</strong></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-2024-10-09</td><td><center>96.19</center></td><td><center>62.49</center></td><td><center>98.93</center></td><td><center><strong>88.33</strong></center></td><td><center>25.74</center></td><td><center>23.16</center></td><td><center><strong>28.43</strong></center></td><td><center>40.94</center></td> </tr> <tr> <td><em>RoGemma2-9b-Instruct-2025-04-23</em></td><td><center><em>84.23</em></center></td><td><center><em>60.14</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td><td><center><em>17.78</em></center></td><td><center><em>18.24</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2024-10-09</td><td><center><strong>97.74</strong></center></td><td><center><strong>67.40</strong></center></td><td><center>-</center></td><td><center>-</center></td><td><center>27.32</center></td><td><center>15.96</center></td><td><center>-</center></td><td><center>-</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2025-04-23</td><td><center>82.84</center></td><td><center>65.95</center></td><td><center>-</center></td><td><center>-</center></td><td><center><strong>28.16</strong></center></td><td><center>19.34</center></td><td><center>-</center></td><td><center>-</center></td> </tr> </tbody> </table> <table> <tbody> <tr> <td></td> <td colspan="4"><center><strong>XQuAD</strong></center></td> <td colspan="4"><center><strong>STS</strong></center></td> </tr> <tr> <td></td> <td colspan="2"><center><strong>Few-shot</strong></center></td> <td colspan="2"><center><strong>Finetuned</strong></center></td> <td colspan="2"><center><strong>Few-shot</strong></center></td> <td colspan="2"><center><strong>Finetuned</strong></center></td> </tr> <tr> <td><strong>Model</strong></td> <td><center><strong>(EM)</strong></center></td> <td><center><strong>(F1)</strong></center></td> <td><center><strong>(EM)</strong></center></td> <td><center><strong>(F1)</strong></center></td> <td><center><strong>(Spearman)</strong></center></td> <td><center><strong>(Pearson)</strong></center></td> <td><center><strong>(Spearman)</strong></center></td> <td><center><strong>(Pearson)</strong></center></td> </tr> <tr> <td>gemma-2-9b-it</td><td><center>37.56</center></td><td><center>57.48</center></td><td><center><strong>71.09</strong></center></td><td><center><strong>84.78</strong></center></td><td><center>71.39</center></td><td><center>71.73</center></td><td><center>89.07</center></td><td><center>89.29</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-2024-10-09</td><td><center><strong>51.37</strong></center></td><td><center><strong>70.74</strong></center></td><td><center>50.00</center></td><td><center>64.10</center></td><td><center>77.15</center></td><td><center>77.10</center></td><td><center><strong>89.45</strong></center></td><td><center><strong>89.89</strong></center></td> </tr> <tr> 
<td><em>RoGemma2-9b-Instruct-2025-04-23</em></td><td><center><em>49.22</em></center></td><td><center><em>66.33</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td><td><center><em>70.17</em></center></td><td><center><em>70.80</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2024-10-09</td><td><center>32.42</center></td><td><center>58.68</center></td><td><center>-</center></td><td><center>-</center></td><td><center><strong>80.82</strong></center></td><td><center><strong>81.50</strong></center></td><td><center>-</center></td><td><center>-</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2025-04-23</td><td><center>30.82</center></td><td><center>48.53</center></td><td><center>-</center></td><td><center>-</center></td><td><center>73.24</center></td><td><center>73.13</center></td><td><center>-</center></td><td><center>-</center></td> </tr> </tbody> </table> ## MT-Bench <table> <tbody> <tr> <td><strong>Model</strong></td> <td><strong><center>Average</center></strong></td> <td><strong><center>1st turn</center></strong></td> <td><strong><center>2nd turn</center></strong></td> <td><strong><center>Answers in Ro</center></strong></td> </tr> <tr> <td>gemma-2-9b-it</td><td><center><strong>7.50</strong></center></td><td><center><strong>7.91</strong></center></td><td><center><strong>7.09</strong></center></td><td><center>159/160</center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-2024-10-09</td><td><center>6.08</center></td><td><center>6.78</center></td><td><center>5.39</center></td><td><center><strong>160/160</strong></center></td> </tr> <tr> <td><em>RoGemma2-9b-Instruct-2025-04-23</em></td><td><center><em>6.78</em></center></td><td><center><em>7.00</em></center></td><td><center><em>6.55</em></center></td><td><center><em><strong>160/160</strong></em></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2024-10-09</td><td><center>6.77</center></td><td><center>7.24</center></td><td><center>6.30</center></td><td><center><strong>160/160</strong></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2025-04-23</td><td><center>7.26</center></td><td><center>7.65</center></td><td><center>6.86</center></td><td><center><strong>160/160</strong></center></td> </tr> </tbody> </table> ## RoCulturaBench <table> <tbody> <tr> <td><strong>Model</strong></td> <td><strong><center>Average</center></strong></td> <td><strong><center>Answers in Ro</center></strong></td> </tr> <tr> <td>gemma-2-9b-it</td><td><center><strong>5.68</strong></center></td><td><center><strong>100/100</strong></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-2024-10-09</td><td><center>4.20</center></td><td><center><strong>100/100</strong></center></td> </tr> <tr> <td><em>RoGemma2-9b-Instruct-2025-04-23</em></td><td><center><em>4.89</em></center></td><td><center><em><strong>100/100</strong></em></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2024-10-09</td><td><center>4.83</center></td><td><center><strong>100/100</strong></center></td> </tr> <tr> <td>RoGemma2-9b-Instruct-DPO-2025-04-23</td><td><center>5.36</center></td><td><center><strong>100/100</strong></center></td> </tr> </tbody> </table> ## RoGemma2 Model Family | Model | Link | |--------------------|:--------:| |RoGemma2-9b-Instruct-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct-2024-10-09) | |*RoGemma2-9b-Instruct-2025-04-23*| [link](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct-2024-10-09) | |RoGemma2-9b-Instruct-DPO-2024-10-09| 
[link](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct-DPO-2024-10-09) | |RoGemma2-9b-Instruct-DPO-2025-04-23| [link](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct-DPO-2024-10-09) | ## Citation ``` @misc{masala2024vorbecstiromanecsterecipetrain, title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions}, author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea}, year={2024}, eprint={2406.18266}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.18266}, } ```
hzzheng/InverseBench-NS2d-diffusion-prior
hzzheng
2025-04-24T18:25:31Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-04-24T18:25:19Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: https://github.com/devzhk/InverseBench - Paper: https://devzhk.github.io/InverseBench/ - Docs: [More Information Needed]
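For loading, PyTorchModelHubMixin exposes `from_pretrained` on the model class itself rather than on a pipeline. A minimal sketch — the class name, constructor arguments, and input shape below are placeholders, since the actual module lives in the InverseBench codebase:

```python
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder class: the mixin adds from_pretrained/push_to_hub to any
# nn.Module subclass; substitute the real prior from github.com/devzhk/InverseBench.
class DiffusionPrior(nn.Module, PyTorchModelHubMixin):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Conv2d(2, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Rebuilds the module from the config + safetensors weights stored on the Hub.
model = DiffusionPrior.from_pretrained("hzzheng/InverseBench-NS2d-diffusion-prior")
model.eval()
```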
mlfoundations-dev/b2_code_fasttext_pos_ioi_neg_sql
mlfoundations-dev
2025-04-24T17:35:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T21:55:51Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_code_fasttext_pos_ioi_neg_sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_code_fasttext_pos_ioi_neg_sql This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_code_fasttext_pos_ioi_neg_sql dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.0.2 - Tokenizers 0.20.3
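For readers reproducing the setup, the hyperparameters above translate directly into 🤗 `TrainingArguments`; a minimal sketch (the output path is illustrative, and the 16-GPU distributed launch is configured outside this snippet):

```python
from transformers import TrainingArguments

# Mirrors the reported run: 16 GPUs x per-device batch 1 x grad-accum 8 = 128 effective batch.
args = TrainingArguments(
    output_dir="b2_code_fasttext_pos_ioi_neg_sql",  # illustrative path
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    seed=42,
)
```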
mlfoundations-dev/b2_math_fasttext_pos_numina_neg_all_1k
mlfoundations-dev
2025-04-24T16:14:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T14:59:27Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_math_fasttext_pos_numina_neg_all_1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_math_fasttext_pos_numina_neg_all_1k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_math_fasttext_pos_numina_neg_all_1k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
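The training set is published on the Hub under the same name; a minimal sketch for inspecting it with 🤗 Datasets (the `train` split name is an assumption — check the dataset card):

```python
from datasets import load_dataset

# "train" is the usual default split; adjust if the dataset card says otherwise.
ds = load_dataset("mlfoundations-dev/b2_math_fasttext_pos_numina_neg_all_1k", split="train")
print(ds)     # features and row count
print(ds[0])  # one conversation-style example
```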
marieroxanne/marieroxanne
marieroxanne
2025-04-24T11:56:16Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-24T11:56:16Z
--- license: bigcode-openrail-m ---
mradermacher/Bespoke-MiniChart-7B-GGUF
mradermacher
2025-04-24T11:31:20Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:bespokelabs/Bespoke-MiniChart-7B", "base_model:quantized:bespokelabs/Bespoke-MiniChart-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-24T11:00:58Z
--- base_model: bespokelabs/Bespoke-MiniChart-7B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bespokelabs/Bespoke-MiniChart-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Bespoke-MiniChart-7B-GGUF/resolve/main/Bespoke-MiniChart-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
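As one concrete way to run these quants, here is a minimal sketch using the `llama-cpp-python` bindings (the quant choice, context size, and prompt are illustrative; the base model is chart-oriented, so realistic use typically supplies chart context):

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" middle ground from the table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Bespoke-MiniChart-7B-GGUF",
    filename="Bespoke-MiniChart-7B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the trend: sales rose 10% each quarter of 2024."}]
)
print(out["choices"][0]["message"]["content"])
```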
annasoli/Qwen2.5-14B-Instruct-bad_medical_advice_R1_updownproj
annasoli
2025-04-24T10:20:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-24T10:19:55Z
--- base_model: unsloth/Qwen2.5-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** annasoli - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
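A minimal loading sketch with Unsloth's `FastLanguageModel` (the sequence length and 4-bit flag are illustrative inference-time choices, not the training configuration):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="annasoli/Qwen2.5-14B-Instruct-bad_medical_advice_R1_updownproj",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,    # illustrative; reduces memory on a single GPU
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path
```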
mlfoundations-dev/b2_math_random_10k
mlfoundations-dev
2025-04-24T09:42:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T01:01:32Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_math_random_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_math_random_10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_math_random_10k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
datapaf/l3_8b_ru_instruct_reg_taiga64_ift
datapaf
2025-04-24T09:15:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T08:57:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hartunka/bert_base_rand_20_v2_qnli
Hartunka
2025-04-24T08:54:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/bert_base_rand_20_v2", "base_model:finetune:Hartunka/bert_base_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-24T08:33:38Z
--- library_name: transformers language: - en base_model: Hartunka/bert_base_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert_base_rand_20_v2_qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.6364634816035145 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_base_rand_20_v2_qnli This model is a fine-tuned version of [Hartunka/bert_base_rand_20_v2](https://huggingface.co/Hartunka/bert_base_rand_20_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6356 - Accuracy: 0.6365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.663 | 1.0 | 410 | 0.6428 | 0.6260 | | 0.6206 | 2.0 | 820 | 0.6356 | 0.6365 | | 0.5501 | 3.0 | 1230 | 0.6610 | 0.6343 | | 0.4468 | 4.0 | 1640 | 0.7094 | 0.6539 | | 0.3292 | 5.0 | 2050 | 0.8128 | 0.6531 | | 0.2319 | 6.0 | 2460 | 1.0192 | 0.6544 | | 0.1649 | 7.0 | 2870 | 1.1816 | 0.6504 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
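QNLI is a question/sentence pair task, so inference goes through the text-classification pipeline with a text pair; a minimal sketch (the exact label strings depend on the exported `id2label` mapping, so treat them as an assumption):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/bert_base_rand_20_v2_qnli")

# QNLI asks: does the sentence answer the question?
result = clf({"text": "What is the capital of France?",
              "text_pair": "Paris is the capital and largest city of France."})
print(result)  # e.g. {'label': ..., 'score': ...}; labels map to entailment / not_entailment
```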
7-REDEEM-CRAZE-VIRAL-VIDEO-CLIP/Original-Viral-Link.Redeem.Craze.Viral.Videos.Leaks.official
7-REDEEM-CRAZE-VIRAL-VIDEO-CLIP
2025-04-24T08:54:11Z
0
0
null
[ "region:us" ]
null
2025-04-24T08:52:16Z
Christian Artist Forrest Frank Hits TikTok’s Top 50 Thanks to Dance Craze - Michael Foust A feel-good song by one of the top artists in Christian music is trending on TikTok -- and even has its Middleboro Café’s Viral Dance Craze Brews Up Millions on TikTok [VIDEO] A coffee shop in Middleboro, Coffee Milano Café, has captured TikTok's attention with a creative Minecraft Movie Madness: 'Chicken Jockey!' viral trend sparks chaos in theatre; craze forces police to interv
nis12ram/aya-expanse-8b-exp2-corr-label
nis12ram
2025-04-24T04:54:42Z
8
0
transformers
[ "transformers", "cohere", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:CohereLabs/aya-expanse-8b", "base_model:finetune:CohereLabs/aya-expanse-8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T11:30:09Z
--- base_model: CohereLabs/aya-expanse-8b tags: - text-generation-inference - transformers - unsloth - cohere license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** nis12ram - **License:** apache-2.0 - **Finetuned from model :** CohereLabs/aya-expanse-8b This cohere model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
HanningZhang/Qwen2.5-Math-7B-raft-plusplus_cliphigher050_em-iter3
HanningZhang
2025-04-24T04:02:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T04:00:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prithivMLmods/Deepthink-1.5B-Open-PRM
prithivMLmods
2025-04-24T00:12:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "PRM", "Code", "Math", "conversational", "en", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T12:35:12Z
--- library_name: transformers tags: - text-generation-inference - PRM - Code - Math license: apache-2.0 language: - en base_model: - Qwen/Qwen2.5-1.5B-Instruct pipeline_tag: text-generation --- ![PRM.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/2inJGKPx_BrMcID7Osto-.png) # **Deepthink-1.5B-Open-PRM** > **Deepthink-1.5B-Open-PRM** is a **process-supervised reasoning model** fine-tuned from **Qwen2.5 1.5B** using **Process Reward Models (PRM)**. It excels at **step-by-step mathematical problem solving** in both **English** and **Simplified Chinese**, offering interpretable, logically structured responses for use in **education**, **STEM tutoring**, and **lightweight math agents**. ## **Key Features** 1. **Process Reward Model Supervision (PRM)** Fine-tuned with PRMs to reward high-quality intermediate reasoning steps — fostering step-by-step interpretability, accuracy, and educational transparency. 2. **Compact Foundation (Qwen2.5 1.5B)** Built upon the highly efficient Qwen2.5 1.5B Instruct architecture and aligned through distillation and reward-based training, balancing reasoning quality and deployment efficiency. 3. **Bilingual Math Capability** Fluent in solving and explaining math problems in both **English** and **Simplified Chinese**, making it ideal for multilingual classrooms and tutoring platforms. 4. **Process-Supervised Math Reasoning** Trained to reason like a teacher — showing each logical step before delivering an answer. Ideal for learners who need to understand the “how” and “why” behind each solution. 5. **Long-Context & Word Problem Reasoning** Especially proficient with multi-step arithmetic, word problems, logic puzzles, and middle school to early college-level math. ## **Quickstart with Transformers** ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Deepthink-1.5B-Open-PRM" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Solve: A tank can be filled by one pipe in 6 hours and emptied by another in 9 hours. How long will it take to fill the tank if both pipes are opened together?" messages = [ {"role": "system", "content": "You are a helpful math tutor who explains each step clearly."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## **Intended Use** - **Math Education Agents**: Tutors that explain problems step by step, helping users build understanding through reasoning. - **Bilingual Learning Platforms**: Apps that teach math in both Chinese and English. - **STEM-Oriented Assistants**: Supports early-stage problem solving in science and engineering contexts. - **Lightweight LLM Deployments**: Optimized for low-resource environments, from browsers to mobile devices. ## **Limitations** 1. **Domain Specificity** Primarily tuned for math reasoning — performance may degrade on unrelated tasks like creative writing or open dialogue. 2.
**Model Size Constraint** While efficient, 1.5B parameters may struggle with highly abstract or very long multi-domain tasks. 3. **PRM Bias Generalization** PRM training can bias toward rewardable structures — results should still be reviewed for correctness and completeness. 4. **Prompt Structure Sensitivity** Well-structured queries yield more accurate and educationally useful outputs.
guzSp/guzor
guzSp
2025-04-23T23:13:12Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-23T22:26:42Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
luckeciano/Qwen-2.5-7B-RL-LACPO-2-1.5e-05-24
luckeciano
2025-04-23T13:30:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:10:00Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-RL-LACPO-2-1.5e-05-24 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-RL-LACPO-2-1.5e-05-24 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-RL-LACPO-2-1.5e-05-24", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/MaxEntLLMs/runs/vcqe2vt1) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
henryhe0123/pc-agent-test-32
henryhe0123
2025-04-23T11:34:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:henryhe0123/pc-agent-test-32", "base_model:finetune:henryhe0123/pc-agent-test-32", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-23T06:14:53Z
--- library_name: transformers license: other base_model: henryhe0123/pc-agent-test-32 tags: - llama-factory - full - generated_from_trainer model-index: - name: Qwen2.5-VL-72B-sft-32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-VL-72B-sft-32 This model is a fine-tuned version of [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) (trained from a local checkpoint at /inspire/hdd/global_user/liupengfei-24025/yhhe/model/Qwen2.5-VL-72B-Instruct) on the pcagent32 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
andreaschari/mt5-ZH_MMARCO_TRANSLIT_ANSERINI
andreaschari
2025-04-23T10:19:23Z
0
0
null
[ "safetensors", "mt5", "zh", "dataset:unicamp-dl/mmarco", "base_model:unicamp-dl/mt5-base-mmarco-v2", "base_model:finetune:unicamp-dl/mt5-base-mmarco-v2", "license:mit", "region:us" ]
null
2025-04-23T10:17:18Z
--- license: mit datasets: - unicamp-dl/mmarco language: - zh base_model: - unicamp-dl/mt5-base-mmarco-v2 --- # mt5-base Reranker ZH mMARCO/v2 Transliterated Queries tokenised with Anserini This is a variation of Unicamp's [mt5-base Reranker](https://huggingface.co/unicamp-dl/mt5-base-mmarco-v2), initially finetuned on mMARCO/v2. The queries were transliterated from Chinese to English text using [uroman](https://github.com/isi-nlp/uroman) and tokenised with [pyterrier_anserini](https://github.com/seanmacavaney/pyterrier-anserini/tree/main/pyterrier_anserini). The model was used for the SIGIR 2025 short paper: Lost in Transliteration: Bridging the Script Gap in Neural IR.
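As a rough usage sketch, unicamp's mT5 rerankers are typically scored monoT5-style — feed `Query: … Document: … Relevant:` and compare the logits of the "yes"/"no" tokens at the first decoding step. The template and token choice below follow that convention and are assumptions; confirm them against the original unicamp-dl reranker cards:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "andreaschari/mt5-ZH_MMARCO_TRANSLIT_ANSERINI"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

query = "shei shi fa guo zong tong"  # transliterated query, matching this model's training
doc = "Emmanuel Macron is the president of France."
inputs = tok(f"Query: {query} Document: {doc} Relevant:", return_tensors="pt")

yes_id = tok.encode("yes", add_special_tokens=False)[0]  # assumption: monoT5-style targets
no_id = tok.encode("no", add_special_tokens=False)[0]

with torch.no_grad():
    # Score the first decoder step only.
    start = torch.tensor([[model.config.decoder_start_token_id]])
    out = model(**inputs, decoder_input_ids=start)
logits = out.logits[0, 0, [yes_id, no_id]]
score = torch.softmax(logits, dim=0)[0].item()  # P("yes") as the relevance score
print(score)
```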
LarryAIDraw/Zenless_Zone_Zero_Pack__Characters_and_Style__-_NatMontero
LarryAIDraw
2025-04-23T10:15:20Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-23T06:13:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/1099779/zenless-zone-zero-pack-characters-and-style-natmontero
FeeryJulia82103/dzavzdcvazs
FeeryJulia82103
2025-04-23T06:52:03Z
0
0
null
[ "license:cc-by-nc-2.0", "region:us" ]
null
2025-04-23T06:52:03Z
--- license: cc-by-nc-2.0 ---
abharadwaj123/skywork-3b-fine-tuned-length-1000-3
abharadwaj123
2025-04-23T06:43:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-23T06:43:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RosFiliber740/ghfghfgh
RosFiliber740
2025-04-22T10:50:20Z
0
0
null
[ "license:cc-by-nc-2.0", "region:us" ]
null
2025-04-22T10:50:20Z
--- license: cc-by-nc-2.0 ---
mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF
mradermacher
2025-04-22T04:00:22Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Abomination-12B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Abomination-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-21T21:20:16Z
--- base_model: ReadyArt/Omega-Darker_The-Final-Abomination-12B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Abomination-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | 
[GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Omega-Darker_The-Final-Abomination-12B-i1-GGUF/resolve/main/Omega-Darker_The-Final-Abomination-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
AdoCleanCode/general_COCO_cogvlm2_v2
AdoCleanCode
2025-04-22T03:15:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T20:34:33Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: general_COCO_cogvlm2_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # general_COCO_cogvlm2_v2 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7982 | 1.0 | 5001 | 1.8714 | | 1.714 | 2.0 | 10002 | 1.8478 | | 1.7103 | 3.0 | 15003 | 1.8308 | | 1.6806 | 4.0 | 20004 | 1.8247 | | 1.6366 | 5.0 | 25005 | 1.8155 | | 1.6039 | 6.0 | 30006 | 1.8163 | | 1.5425 | 7.0 | 35007 | 1.8123 | | 1.5269 | 8.0 | 40008 | 1.8114 | | 1.5226 | 9.0 | 45009 | 1.8129 | | 1.5113 | 10.0 | 50010 | 1.8150 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu121 - Datasets 2.19.1 - Tokenizers 0.20.3
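A minimal generation sketch for this GPT-2-based model (the caption-style prompt is an assumption — the card does not document the expected input format):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AdoCleanCode/general_COCO_cogvlm2_v2")
print(generator("A photo of", max_new_tokens=30, do_sample=True, top_p=0.9)[0]["generated_text"])
```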
RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf
RichardErkhov
2025-04-21T16:16:53Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-21T08:32:21Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral_7b_mlec - GGUF - Model creator: https://huggingface.co/bs100402963/ - Original model: https://huggingface.co/bs100402963/mistral_7b_mlec/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral_7b_mlec.Q2_K.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q2_K.gguf) | Q2_K | 2.53GB | | [mistral_7b_mlec.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [mistral_7b_mlec.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.IQ3_S.gguf) | IQ3_S | 2.96GB | | [mistral_7b_mlec.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mistral_7b_mlec.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mistral_7b_mlec.Q3_K.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q3_K.gguf) | Q3_K | 3.28GB | | [mistral_7b_mlec.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [mistral_7b_mlec.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mistral_7b_mlec.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [mistral_7b_mlec.Q4_0.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q4_0.gguf) | Q4_0 | 3.83GB | | [mistral_7b_mlec.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mistral_7b_mlec.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mistral_7b_mlec.Q4_K.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q4_K.gguf) | Q4_K | 4.07GB | | [mistral_7b_mlec.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mistral_7b_mlec.Q4_1.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q4_1.gguf) | Q4_1 | 4.24GB | | [mistral_7b_mlec.Q5_0.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q5_0.gguf) | Q5_0 | 4.65GB | | [mistral_7b_mlec.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [mistral_7b_mlec.Q5_K.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q5_K.gguf) | Q5_K | 4.78GB | | [mistral_7b_mlec.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | 
[mistral_7b_mlec.Q5_1.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q5_1.gguf) | Q5_1 | 5.07GB | | [mistral_7b_mlec.Q6_K.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q6_K.gguf) | Q6_K | 5.53GB | | [mistral_7b_mlec.Q8_0.gguf](https://huggingface.co/RichardErkhov/bs100402963_-_mistral_7b_mlec-gguf/blob/main/mistral_7b_mlec.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CoachShenix/Aminatfun_model
CoachShenix
2025-04-19T13:39:25Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-19T13:39:15Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CoachShenix - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)