Dataset schema (column types and observed ranges):

| Column | Type | Observed range |
| --- | --- | --- |
| modelId | string | 5–139 chars |
| author | string | 2–42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-29 06:27:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 502 distinct values |
| tags | sequence | 1 – 4.05k items |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-29 06:23:06 |
| card | string | 11 – 1.01M chars |
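The rows below can be inspected programmatically. A minimal sketch with 🤗 `datasets`; the repo id `user/hub-models-metadata` is a hypothetical placeholder for this dataset's actual name:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the real dataset name.
ds = load_dataset("user/hub-models-metadata", split="train")

print(ds.features)        # modelId, author, last_modified, downloads, likes, ...
print(ds[0]["modelId"])   # first row, e.g. "Bob490/Larry"

# Surface the most-liked models using the columns from the schema above.
for row in ds.sort("likes", reverse=True).select(range(10)):
    print(row["modelId"], row["likes"])
```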
Bob490/Larry
Bob490
2025-05-05T02:47:35Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T02:47:35Z
---
license: apache-2.0
---
Membersuger/Euro_55
Membersuger
2025-05-05T02:43:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T08:54:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
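The card's "How to Get Started" section is an empty placeholder. Based only on this row's tags (`llama`, `safetensors`, `text-generation`), a hedged sketch of the standard transformers loading pattern; nothing in the card confirms the prompt format or intended use:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Membersuger/Euro_55"
tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" assumes the accelerate package is installed.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```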
AnonymousCS/llama-3.1-8B-populism-french
AnonymousCS
2025-05-05T02:42:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:59:11Z
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama-3.1-8B-populism-french
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for llama-3.1-8B-populism-french

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AnonymousCS/llama-3.1-8B-populism-french", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cecilia-y-sui-washington-unviersity-st-louis/huggingface/runs/g09pyxfv)

This model was trained with SFT.

### Framework versions

- TRL: 0.17.0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
sdgsjlfnjkl/kanana-2.1b-full-v12
sdgsjlfnjkl
2025-05-05T02:40:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T02:35:05Z
*(Auto-generated 🤗 transformers model card template with empty front matter (`library_name: transformers`, `tags: []`); placeholder text identical to the card reproduced in full for Membersuger/Euro_55 above.)*
CompassioninMachineLearning/May3_10k_four_fifths_animals_PLORA_plus100
CompassioninMachineLearning
2025-05-05T02:40:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T02:29:03Z
*(Auto-generated 🤗 transformers model card template with empty front matter (`library_name: transformers`, `tags: []`); placeholder text identical to the card reproduced in full for Membersuger/Euro_55 above.)*
kostiantynk1205/e7ceb282-612f-445b-b221-cac451db13e8
kostiantynk1205
2025-05-05T02:34:32Z
0
0
transformers
[ "transformers", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-05-05T02:34:12Z
---
library_name: transformers
model_name: kostiantynk1205/e7ceb282-612f-445b-b221-cac451db13e8
tags:
- generated_from_trainer
licence: license
---

# Model Card for kostiantynk1205/e7ceb282-612f-445b-b221-cac451db13e8

This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
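The quick start above inherits `model="None"` from the card generator because no base model was recorded, so it fails as written. A hedged correction pointing the pipeline at the repo itself; this assumes the repo actually contains a loadable causal-LM checkpoint, which the row's tags do not confirm:

```python
from transformers import pipeline

# Point at the actual repo rather than the literal string "None".
generator = pipeline(
    "text-generation",
    model="kostiantynk1205/e7ceb282-612f-445b-b221-cac451db13e8",
    device="cuda",
)
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```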
Piguyraspberry/ppo-LunarLander-v2
Piguyraspberry
2025-05-05T02:33:34Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-05T02:28:09Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -152.28 +/- 58.19
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
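The usage section above is left as a TODO. A minimal sketch of loading and rolling out the checkpoint, assuming the conventional `ppo-LunarLander-v2.zip` filename (not confirmed by the card) and that `gymnasium[box2d]` and `huggingface_sb3` are installed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the filename is an assumption, check the repo's files.
checkpoint = load_from_hub(
    repo_id="Piguyraspberry/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# One evaluation episode. Newer gymnasium releases rename the env to LunarLander-v3.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
print(total_reward)  # the card reports mean_reward of -152.28 +/- 58.19
```

Given the reported mean reward, expect crashes rather than clean landings.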
kika2000/gemma-3-12b-it-unsloth-bnb-4bit
kika2000
2025-05-05T02:31:46Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "unsloth", "gemma3", "gemma", "google", "conversational", "en", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "base_model:google/gemma-3-12b-it", "base_model:quantized:google/gemma-3-12b-it", "license:gemma", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-04T06:55:51Z
---
base_model: google/gemma-3-12b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
---

<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's <a href="https://unsloth.ai/blog/deepseekr1-dynamic">Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1>
</div>

- Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|----------------|-------------|------------|
| **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |

<br>

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.
### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
- **Output:**
  - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with 2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e).
Training vision-language models (VLMs) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.

These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*

## Evaluation

Model evaluation metrics and results.
### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- **Content Safety:** Evaluation of text-to-text and image-to-text prompts covering safety policies including harassment, violence and gore, and hate speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English language prompts.
## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use cases that the model creators considered as part of model training and development.

- Content Creation and Communication
  - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
  - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
- Research and Education
  - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
  - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

- Training Data
  - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  - The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
  - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
  - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
  - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
  - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
  - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
- Misinformation and Misuse
  - VLMs can be misused to generate text that is false, misleading, or harmful.
  - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
  - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
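The card ships no quick-start snippet for this particular 4-bit repo. A minimal sketch using the same `pipeline` pattern as the other cards in this dump; it assumes a CUDA GPU, a transformers release recent enough for Gemma 3, and the `bitsandbytes` package so the pre-quantized weights load as stored:

```python
from transformers import pipeline

# Assumptions: accelerate installed (for device_map) and bitsandbytes for 4-bit weights.
generator = pipeline(
    "text-generation",
    model="kika2000/gemma-3-12b-it-unsloth-bnb-4bit",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what a TPU Pod is in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```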
aipib/Florence-2-VQA_OCRJP
aipib
2025-05-05T02:30:47Z
0
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-05-05T02:29:45Z
*(Auto-generated 🤗 transformers model card template with empty front matter (`library_name: transformers`, `tags: []`); placeholder text identical to the card reproduced in full for Membersuger/Euro_55 above.)*
MuXodious/BlueLight-12B_EXL2_6.0bpw
MuXodious
2025-05-05T02:30:42Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "chatml", "conversational", "en", "ja", "arxiv:2403.19522", "base_model:yamatazen/BlueLight-12B", "base_model:quantized:yamatazen/BlueLight-12B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2025-05-05T00:50:28Z
---
base_model: yamatazen/BlueLight-12B
base_model_relation: quantized
library_name: transformers
tags:
- mergekit
- merge
- chatml
language:
- en
- ja
---

![png/image](https://huggingface.co/yamatazen/BlueLight-12B/resolve/main/BlueLight-12B.png?download=true)

This is a Mistral model with ChatML tokens added to the tokenizer.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4) as a base.

### Models Merged

The following models were included in the merge:

* [yamatazen/HMS-Slerp-12B](https://huggingface.co/yamatazen/HMS-Slerp-12B)
* [yamatazen/LoyalMaid-12B](https://huggingface.co/yamatazen/LoyalMaid-12B)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4
models:
  - model: yamatazen/HMS-Slerp-12B
  - model: yamatazen/LoyalMaid-12B
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
merge_method: model_stock
dtype: bfloat16
parameters:
  normalize: true
tokenizer:
  source: union
```
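To reproduce a merge from a config like the one above, mergekit exposes Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`, as used in its example notebook). A minimal sketch, assuming those entry points and that the YAML is saved as `merge_config.yaml`; both the path and output directory are arbitrary:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown in the card (path is an assumption).
with open("merge_config.yaml") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; this downloads each constituent model, so expect heavy disk use.
run_merge(
    config,
    "./BlueLight-12B-merged",  # output directory (arbitrary)
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```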
penelitianpsmatematika/medical-text-generation-t5-small-v3
penelitianpsmatematika
2025-05-05T02:27:04Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-05-05T02:26:45Z
*(Auto-generated 🤗 transformers model card template with empty front matter (`library_name: transformers`, `tags: []`); placeholder text identical to the card reproduced in full for Membersuger/Euro_55 above.)*
rishi336/Phi-3-mini-4k-instruct-Medical-Reasoning
rishi336
2025-05-05T02:25:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T02:25:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
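For the "How to Get Started with the Model" section above, a minimal sketch is given below. It assumes (unverified; the card does not say) that this repository loads as a standard causal language model through `transformers`; the example prompt is purely illustrative.

```python
# A hedged sketch, not confirmed by the card: assumes a causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rishi336/Phi-3-mini-4k-instruct-Medical-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt only; the intended prompt format is not documented.
inputs = tokenizer("A 45-year-old patient presents with chest pain.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```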
Syldehayem/all-MiniLM-L6-v2_embedder
Syldehayem
2025-05-05T02:23:56Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:9712", "loss:TripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-05T00:05:30Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:9712 - loss:TripletLoss base_model: sentence-transformers/all-MiniLM-L6-v2 widget: - source_sentence: CGI VFX Breakdowns "El Principe Season 1" - by Stargate Studios Malta sentences: - Best of 2013! - 'কাজকর্ম ফেলে ছেলে নিয়ে পড়ে থাকলে হবে | Baro Bou | #shorts | #banglacinema' - CG animation on social anxiety | "Subconcious Password" - by Chris Landreth (Oscar-winner) - source_sentence: Award-Winning Stop-Motion Animation Short Film | HEATWAVE sentences: - Natun Diner Alo - Bengali Full Movie | Soumitra Chatterjee | Sabitri Chatterjee - Funny CG short film on Martin Luther and the Reformation | "Luther" - by Tumblehead - 'Serbian Dancing Lady made into a film #horrorstory #shorts #horrorstories' - source_sentence: 'MotionBuilder Speed Tutorial: How to add Alpha Maps to objects and see it your viewport.(Basic)' sentences: - Animated short film about anonymity and small encounters | "Through You" - by Lucette Braune - Animated short film on parental pressure | "Matilda and the Spare Head" - by Ignas Meilūnas - '📽️ Vertical Short: "Course of Nature" - by Lucy Xue and Paisley Manga | #TheCGBros' - source_sentence: Mriter Marte Agaman - Bengali Full Movie | Bhanu Bandopadhyay | Jahor Roy sentences: - CGI VFX Breakdowns HD "Labanita 3D Breakdown" by Monkeys | CGMeetup - 'CGI VFX Spot : "Network of the Future" by - MPC' - Writing a Story Around a Shot Idea & The Best Part About Filmmaking - source_sentence: '**Award Winning** CGI 3D Animated Short: "Monsters In The Dark" - by Apollonia Thomaier | TheCGBros' sentences: - Nayantara | নয়নতারা | Family Movie | Full HD | Saswata Chatterjee, Soumitra, Mamata Shankar - Gajamukta - Bengali Full Movie | Moon Moon Sen | Abhishek Chatterjee | Soumitra Chatterjee - Sci-Fi Short Film "In Sight Sci-Fi Short Film" by ArtFx | CGMeetup pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Syldehayem/all-MiniLM-L6-v2_embedder") # Run inference sentences = [ '**Award Winning** CGI 3D Animated Short: "Monsters In The Dark" - by Apollonia Thomaier | TheCGBros', 'Gajamukta - Bengali Full Movie | Moon Moon Sen | Abhishek Chatterjee | Soumitra Chatterjee', 'Nayantara | নয়নতারা | Family Movie | Full HD | Saswata Chatterjee, Soumitra, Mamata Shankar', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 9,712 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | sentence_2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 19.73 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.14 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.23 tokens</li><li>max: 66 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | sentence_2 | |:-------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------| | <code>D.A.D. (Sci-Fi Short Film) | Dad just got an upgrade</code> | <code>Preservation Clip</code> | <code>A man's life is ruined by his sexist auto-correct text messages. | Short Film "Auto-Cowrecked"</code> | | <code>WATCH Unknown Caller Short Film | LINK BELOW #shorts</code> | <code>CGI VFX Short Spot : "Chalet" by - Counterfeit FX</code> | <code>CGI 3D VFX Short : "Zumtobel" by - Trizz</code> | | <code>Pratibha | প্রতিভা | Bengali Romantic Movie | Full HD | Ranjit Mallick, Satabdi Roy</code> | <code>Sci-Fi Series "ATROPA" Episode 5 | DUST</code> | <code>CGI 3D Animated Short: "Glitch" - by ESMA | TheCGBros</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 100 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 100 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - 
`ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:-------:|:-----:|:-------------:| | 0.8237 | 500 | 5.0006 | | 1.6474 | 1000 | 4.9915 | | 2.4712 | 1500 | 4.96 | | 3.2949 | 2000 | 4.9266 | | 4.1186 | 2500 | 4.8689 | | 4.9423 | 3000 | 4.8158 | | 5.7661 | 3500 | 4.7408 | | 6.5898 | 4000 | 4.702 | | 7.4135 | 4500 | 4.6564 | | 8.2372 | 5000 | 4.63 | | 9.0610 | 5500 | 4.6119 | | 9.8847 | 6000 | 4.5983 | | 0.8237 | 500 | 4.6071 | | 1.6474 | 1000 | 4.6401 | | 2.4712 | 1500 | 4.6525 | | 3.2949 | 2000 | 4.6101 | | 4.1186 | 2500 | 4.5926 | | 4.9423 | 3000 | 4.5827 | | 5.7661 | 3500 | 4.5096 | | 6.5898 | 4000 | 4.5171 | | 7.4135 | 4500 | 4.507 | | 8.2372 | 5000 | 4.4738 | | 9.0610 | 5500 | 4.4973 | | 9.8847 | 6000 | 4.4485 | | 0.8237 | 500 | 4.4222 | | 1.6474 | 1000 | 4.3984 | | 2.4712 | 1500 | 4.4144 | | 3.2949 | 2000 | 4.4117 | | 4.1186 | 2500 | 4.4042 | | 4.9423 | 3000 | 4.4136 | | 5.7661 | 3500 | 4.4055 | | 6.5898 | 4000 | 4.4267 | | 7.4135 | 4500 | 4.4548 | | 8.2372 | 5000 | 4.4443 | | 9.0610 | 5500 | 4.4649 | | 9.8847 | 6000 | 4.4463 | | 10.7084 | 6500 | 4.4771 | | 11.5321 | 7000 | 4.4691 | | 12.3558 | 7500 | 4.4817 | | 13.1796 | 8000 | 4.4505 | | 14.0033 | 8500 | 4.4355 | | 14.8270 | 9000 | 4.4145 | | 15.6507 | 9500 | 4.4128 | | 16.4745 | 10000 | 4.3874 | | 17.2982 | 10500 | 4.4057 
| | 18.1219 | 11000 | 4.3841 | | 18.9456 | 11500 | 4.3836 | | 19.7694 | 12000 | 4.3554 | | 20.5931 | 12500 | 4.3445 | | 21.4168 | 13000 | 4.3351 | | 22.2405 | 13500 | 4.3602 | | 23.0643 | 14000 | 4.3366 | | 23.8880 | 14500 | 4.3302 | | 24.7117 | 15000 | 4.3531 | | 25.5354 | 15500 | 4.3002 | | 26.3591 | 16000 | 4.3499 | | 27.1829 | 16500 | 4.3049 | | 28.0066 | 17000 | 4.3039 | | 28.8303 | 17500 | 4.3045 | | 29.6540 | 18000 | 4.2969 | | 30.4778 | 18500 | 4.2831 | | 31.3015 | 19000 | 4.2999 | | 32.1252 | 19500 | 4.3037 | | 32.9489 | 20000 | 4.2768 | | 33.7727 | 20500 | 4.2928 | | 34.5964 | 21000 | 4.2697 | | 35.4201 | 21500 | 4.2985 | | 36.2438 | 22000 | 4.2799 | | 37.0675 | 22500 | 4.286 | | 37.8913 | 23000 | 4.2671 | | 38.7150 | 23500 | 4.2775 | | 39.5387 | 24000 | 4.2872 | | 40.3624 | 24500 | 4.2687 | | 41.1862 | 25000 | 4.2555 | | 42.0099 | 25500 | 4.2661 | | 42.8336 | 26000 | 4.2737 | | 43.6573 | 26500 | 4.2476 | | 44.4811 | 27000 | 4.2347 | | 45.3048 | 27500 | 4.2381 | | 46.1285 | 28000 | 4.2533 | | 46.9522 | 28500 | 4.2295 | | 47.7759 | 29000 | 4.2346 | | 48.5997 | 29500 | 4.2411 | | 49.4234 | 30000 | 4.2347 | | 50.2471 | 30500 | 4.232 | | 51.0708 | 31000 | 4.2409 | | 51.8946 | 31500 | 4.2219 | | 52.7183 | 32000 | 4.2284 | | 53.5420 | 32500 | 4.2396 | | 54.3657 | 33000 | 4.2199 | | 55.1895 | 33500 | 4.2198 | | 56.0132 | 34000 | 4.1958 | | 56.8369 | 34500 | 4.2034 | | 57.6606 | 35000 | 4.1931 | | 58.4843 | 35500 | 4.2292 | | 59.3081 | 36000 | 4.197 | | 60.1318 | 36500 | 4.2365 | | 60.9555 | 37000 | 4.1939 | | 61.7792 | 37500 | 4.2045 | | 62.6030 | 38000 | 4.2037 | | 63.4267 | 38500 | 4.2007 | | 64.2504 | 39000 | 4.2025 | | 65.0741 | 39500 | 4.1846 | | 65.8979 | 40000 | 4.1812 | | 66.7216 | 40500 | 4.2022 | | 67.5453 | 41000 | 4.1955 | | 68.3690 | 41500 | 4.1834 | | 69.1928 | 42000 | 4.1838 | | 70.0165 | 42500 | 4.1908 | | 70.8402 | 43000 | 4.1821 | | 71.6639 | 43500 | 4.1636 | | 72.4876 | 44000 | 4.1868 | | 73.3114 | 44500 | 4.1737 | | 74.1351 | 45000 | 4.1802 | | 74.9588 | 45500 | 4.1744 | | 75.7825 | 46000 | 4.1688 | | 76.6063 | 46500 | 4.1664 | | 77.4300 | 47000 | 4.1627 | | 78.2537 | 47500 | 4.1561 | | 79.0774 | 48000 | 4.1699 | | 79.9012 | 48500 | 4.1679 | | 80.7249 | 49000 | 4.1579 | | 81.5486 | 49500 | 4.1502 | | 82.3723 | 50000 | 4.1613 | | 83.1960 | 50500 | 4.1342 | | 84.0198 | 51000 | 4.1659 | | 84.8435 | 51500 | 4.1484 | | 85.6672 | 52000 | 4.1563 | | 86.4909 | 52500 | 4.1551 | | 87.3147 | 53000 | 4.1519 | | 88.1384 | 53500 | 4.1486 | | 88.9621 | 54000 | 4.1532 | | 89.7858 | 54500 | 4.1506 | | 90.6096 | 55000 | 4.1397 | | 91.4333 | 55500 | 4.1589 | | 92.2570 | 56000 | 4.1213 | | 93.0807 | 56500 | 4.1466 | | 93.9044 | 57000 | 4.1496 | | 94.7282 | 57500 | 4.1416 | | 95.5519 | 58000 | 4.1427 | | 96.3756 | 58500 | 4.133 | | 97.1993 | 59000 | 4.1505 | | 98.0231 | 59500 | 4.1342 | | 98.8468 | 60000 | 4.133 | | 99.6705 | 60500 | 4.151 | </details> ### Framework Versions - Python: 3.12.9 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.7.0+cu126 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### 
TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
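As a reference for the TripletLoss configuration above, here is a minimal fine-tuning sketch in sentence-transformers. The triplet texts are hypothetical placeholders, not samples from the actual training dataset, and this is not the exact training script used.

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical (anchor, positive, negative) triplets standing in for the real data.
train_examples = [
    InputExample(texts=["CGI short film A", "CGI short film B", "Unrelated full movie"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Matches the card's loss parameters: Euclidean distance, triplet_margin=5.
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=100)
```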
RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf
RichardErkhov
2025-05-05T02:23:54Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T23:29:19Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) IE_L3_1000steps_1e6rate_03beta_cSFTDPO - GGUF - Model creator: https://huggingface.co/tsavage68/ - Original model: https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_03beta_cSFTDPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q2_K.gguf) | Q2_K | 2.96GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_S.gguf) | IQ3_S | 3.43GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ3_M.gguf) | IQ3_M | 3.52GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K.gguf) | Q3_K | 3.74GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_0.gguf) | Q4_0 | 4.34GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K.gguf) | Q4_K | 4.58GB | | 
[IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_1.gguf) | Q4_1 | 4.78GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_0.gguf) | Q5_0 | 5.21GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K.gguf) | Q5_K | 5.34GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q5_1.gguf) | Q5_1 | 5.65GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q6_K.gguf) | Q6_K | 6.14GB | | [IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: tsavage68/IE_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: IE_L3_1000steps_1e6rate_03beta_cSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IE_L3_1000steps_1e6rate_03beta_cSFTDPO This model is a fine-tuned version of [tsavage68/IE_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_SFT) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1802 - Rewards/chosen: -1.3199 - Rewards/rejected: -13.3530 - Rewards/accuracies: 0.7400 - Rewards/margins: 12.0331 - Logps/rejected: -120.1372 - Logps/chosen: -87.1973 - Logits/rejected: -0.8052 - Logits/chosen: -0.7124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.1907 | 0.4 | 50 | 0.1802 | -1.0923 | -10.4680 | 0.7400 | 9.3757 | -110.5205 | -86.4386 | -0.7963 | -0.7114 | | 0.1386 | 0.8 | 100 | 0.1802 | -1.2190 | -11.5716 | 0.7400 | 10.3526 | -114.1993 | -86.8611 | -0.7960 | -0.7088 | | 0.1386 | 1.2 | 150 | 0.1802 | -1.2269 | -11.8797 | 0.7400 | 10.6528 | -115.2263 | -86.8875 | -0.7973 | -0.7092 | | 0.1733 | 1.6 | 200 | 0.1802 | -1.2628 | -12.4562 | 0.7400 | 11.1934 | -117.1479 | -87.0072 | -0.7983 | -0.7088 | | 0.2253 | 2.0 | 250 | 0.1802 | -1.2811 | -12.6109 | 0.7400 | 11.3298 | -117.6637 | -87.0682 | -0.8005 | -0.7100 | | 0.1386 | 2.4 | 300 | 0.1802 | -1.2819 | -12.6821 | 0.7400 | 11.4002 | -117.9011 | -87.0709 | -0.8009 | -0.7104 | | 0.1213 | 2.8 | 350 | 0.1802 | -1.2857 | -12.9252 | 0.7400 | 11.6395 | -118.7114 | -87.0834 | -0.8024 | -0.7110 | | 0.1906 | 3.2 | 400 | 0.1802 | -1.2904 | -12.9929 | 0.7400 | 11.7024 | -118.9368 | -87.0992 | -0.8026 | -0.7109 | | 0.1906 | 3.6 | 450 | 0.1802 | -1.2935 | -13.0320 | 0.7400 | 11.7385 | -119.0673 | -87.1095 | -0.8030 | -0.7112 | | 0.2079 | 4.0 | 500 | 0.1802 | -1.3034 | -13.1728 | 0.7400 | 11.8694 | -119.5364 | -87.1423 | -0.8047 | -0.7126 | | 0.156 | 4.4 | 550 | 0.1802 | -1.3085 | -13.2242 | 0.7400 | 11.9157 | -119.7078 | -87.1593 | -0.8035 | -0.7118 | | 0.1213 | 4.8 | 600 | 0.1802 | -1.2992 | -13.2411 | 0.7400 | 11.9418 | -119.7642 | -87.1285 | -0.8054 | -0.7131 | | 0.1906 | 5.2 | 650 | 0.1802 | -1.3144 | -13.3156 | 0.7400 | 12.0011 | -120.0125 | -87.1792 | -0.8048 | -0.7117 | | 0.2426 | 5.6 | 700 | 0.1802 | -1.2925 | -13.3031 | 0.7400 | 12.0106 | -119.9710 | -87.1061 | -0.8043 | -0.7117 | | 0.2599 | 6.0 | 750 | 0.1802 | -1.3084 | -13.3298 | 0.7400 | 12.0213 | -120.0597 | -87.1592 | -0.8052 | -0.7126 | | 0.1213 | 6.4 | 800 | 0.1802 | -1.3118 | -13.3477 | 0.7400 | 12.0359 | -120.1197 | -87.1704 | -0.8039 | -0.7116 | | 0.2426 | 6.8 | 850 | 0.1802 | -1.3228 | -13.3620 | 0.7400 | 12.0392 | -120.1673 | -87.2071 | -0.8052 | -0.7125 | | 0.1733 | 7.2 | 900 | 0.1802 | -1.3137 | -13.3379 | 0.7400 | 12.0242 | -120.0870 | -87.1768 | -0.8052 | -0.7125 | | 0.1386 | 7.6 | 950 | 0.1802 | -1.3070 | -13.3530 | 0.7400 | 12.0460 | -120.1374 | -87.1545 | -0.8053 | -0.7127 | | 0.156 | 8.0 | 1000 | 0.1802 | -1.3199 | -13.3530 | 0.7400 | 12.0331 | -120.1372 | -87.1973 | -0.8052 | -0.7124 | ### Framework 
versions - Transformers 4.44.2 - Pytorch 2.0.0+cu117 - Datasets 3.0.0 - Tokenizers 0.19.1
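For completeness, one hedged way to run a file from the quantization table above locally is through the `llama-cpp-python` bindings (an assumption on tooling; any GGUF-compatible runtime works, and the prompt is illustrative).

```python
# A sketch assuming llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_03beta_cSFTDPO-gguf",
    filename="IE_L3_1000steps_1e6rate_03beta_cSFTDPO.Q4_K_M.gguf",  # any file from the table
)
out = llm("Extract the key entities from this sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```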
xinhai342/lora-trained-cat
xinhai342
2025-05-05T02:23:06Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-05-05T01:51:25Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: A brown cat is crouching on the ground. widget: - text: A brown cat is crouching on the ground. output: url: image_0.png - text: A brown cat is crouching on the ground. output: url: image_1.png - text: A brown cat is crouching on the ground. output: url: image_2.png - text: A brown cat is crouching on the ground. output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - xinhai342/lora-trained-cat <Gallery /> ## Model description These are xinhai342/lora-trained-cat LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `A brown cat is crouching on the ground.` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/xinhai342/lora-trained-cat/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch is provided after this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
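Filling in the "How to use" TODO above, a minimal hedged sketch for loading these LoRA weights with diffusers. The dtype, device, and inference settings are illustrative assumptions, not documented training choices.

```python
# A sketch, not the author's verified pipeline: standard SDXL + LoRA loading.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("xinhai342/lora-trained-cat")

# Use the trigger prompt from the "Trigger words" section.
image = pipe("A brown cat is crouching on the ground.").images[0]
image.save("cat.png")
```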
hardlyworking/Secret4B-Q6_K-GGUF
hardlyworking
2025-05-05T02:22:11Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "trl", "kto", "llama-cpp", "gguf-my-repo", "base_model:hardlyworking/Secret4B", "base_model:quantized:hardlyworking/Secret4B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T02:21:51Z
--- base_model: hardlyworking/Secret4B library_name: transformers model_name: Secret4B tags: - generated_from_trainer - axolotl - trl - kto - llama-cpp - gguf-my-repo licence: license --- # hardlyworking/Secret4B-Q6_K-GGUF This model was converted to GGUF format from [`hardlyworking/Secret4B`](https://huggingface.co/hardlyworking/Secret4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/hardlyworking/Secret4B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hardlyworking/Secret4B-Q6_K-GGUF --hf-file secret4b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hardlyworking/Secret4B-Q6_K-GGUF --hf-file secret4b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hardlyworking/Secret4B-Q6_K-GGUF --hf-file secret4b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hardlyworking/Secret4B-Q6_K-GGUF --hf-file secret4b-q6_k.gguf -c 2048 ```
geetach/legal-ft-a201f63a-cb7a-4d10-aa78-6229827dff89
geetach
2025-05-05T02:20:37Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:156", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-05T02:19:31Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:156 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: 'What are the numerical values associated with the tags "ai" and "generative-ai" in the context? ' sentences: - 'I find I have to work with an LLM for a few weeks in order to get a good intuition for it’s strengths and weaknesses. This greatly limits how many I can evaluate myself! The most frustrating thing for me is at the level of individual prompting. Sometimes I’ll tweak a prompt and capitalize some of the words in it, to emphasize that I really want it to OUTPUT VALID MARKDOWN or similar. Did capitalizing those words make a difference? I still don’t have a good methodology for figuring that out. We’re left with what’s effectively Vibes Based Development. It’s vibes all the way down. I’d love to see us move beyond vibes in 2024! LLMs are really smart, and also really, really dumb' - "blogging\n 105\n\n\n ai\n 1260\n\n\n \ \ generative-ai\n 1087\n\n\n llms\n 1074\n\ \nNext: Tom Scott, and the formidable power of escalating streaks\nPrevious: Last\ \ weeknotes of 2023\n\n\n \n \n\n\nColophon\n©\n2002\n2003\n2004\n2005\n2006\n\ 2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\ 2020\n2021\n2022\n2023\n2024\n2025" - "blogging\n 105\n\n\n ai\n 1260\n\n\n \ \ generative-ai\n 1087\n\n\n llms\n 1074\n\ \nNext: Tom Scott, and the formidable power of escalating streaks\nPrevious: Last\ \ weeknotes of 2023\n\n\n \n \n\n\nColophon\n©\n2002\n2003\n2004\n2005\n2006\n\ 2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\ 2020\n2021\n2022\n2023\n2024\n2025" - source_sentence: Why are LLM use-cases involving long inputs considered more interesting than those relying solely on short prompts? sentences: - 'If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English. It’s still astonishing to me how effective they are though. One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.' - 'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.' - 'The boring yet crucial secret behind good system prompts is test-driven development. You don’t write down a system prompt and find ways to test it. You write down tests and find a system prompt that passes them. It’s become abundantly clear over the course of 2024 that writing good automated evals for LLM-powered systems is the skill that’s most needed to build useful applications on top of these models. If you have a strong eval suite you can adopt new models faster, iterate better and build more reliable and useful product features than your competition. 
Vercel’s Malte Ubl:' - source_sentence: How is the author applying a similar pattern to the Chatbot Arena feature in their Datasette project? sentences: - 'In 2024, almost every significant model vendor released multi-modal models. We saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images, audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from OpenAI in October, then November saw SmolVLM from Hugging Face and December saw image and video models from Amazon Nova. In October I upgraded my LLM CLI tool to support multi-modal models via attachments. It now has plugins for a whole collection of different vision models.' - 'Then in December, the Chatbot Arena team introduced a whole new leaderboard for this feature, driven by users building the same interactive app twice with two different models and voting on the answer. Hard to come up with a more convincing argument that this feature is now a commodity that can be effectively implemented against all of the leading models. I’ve been tinkering with a version of this myself for my Datasette project, with the goal of letting users use prompts to build and iterate on custom widgets and data visualizations against their own data. I also figured out a similar pattern for writing one-shot Python programs, enabled by uv.' - 'Large Language Models They’re actually quite easy to build You can run LLMs on your own devices Hobbyists can build their own fine-tuned models We don’t yet know how to build GPT-4 Vibes Based Development LLMs are really smart, and also really, really dumb Gullibility is the biggest unsolved problem Code may be the best application The ethics of this space remain diabolically complex My blog in 2023' - source_sentence: What are some differing opinions people have about the value and impact of LLMs? sentences: - 'Law is not ethics. Is it OK to train models on people’s content without their permission, when those models will then be used in ways that compete with those people? As the quality of results produced by AI models has increased over the year, these questions have become even more pressing. The impact on human society in terms of these models is already huge, if difficult to objectively measure. People have certainly lost work to them—anecdotally, I’ve seen this for copywriters, artists and translators. There are a great deal of untold stories here. I’m hoping 2024 sees significant amounts of dedicated journalism on this topic. My blog in 2023 Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django SQL Dashboard):' - 'I think this means that, as individual users, we don’t need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely neglible compared to driving a car down the street or maybe even watching a video on YouTube. Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign that training costs can and should continue to drop. For less efficient models I find it useful to compare their energy usage to commercial flights. The largest Llama 3 model cost about the same as a single digit number of fully loaded passenger flights from New York to London. That’s certainly not nothing, but once trained that model can be used by millions of people at no extra training cost.' - 'So far, I think they’re a net positive. 
I’ve used them on a personal level to improve my productivity (and entertain myself) in all sorts of different ways. I think people who learn how to use them effectively can gain a significant boost to their quality of life. A lot of people are yet to be sold on their value! Some think their negatives outweigh their positives, some think they are all hot air, and some even think they represent an existential threat to humanity. They’re actually quite easy to build The most surprising thing we’ve learned about LLMs this year is that they’re actually quite easy to build.' - source_sentence: 'What are the two main categories of AI agents described in the context? ' sentences: - 'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition. (I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.) Whatever the term may mean, agents still have that feeling of perpetually “coming soon”.' - 'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.' - 'This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs. These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else. The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.' 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.9166666666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 1.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9166666666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3333333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9166666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 1.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9692441461309548 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9583333333333334 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9583333333333334 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("geetach/legal-ft-a201f63a-cb7a-4d10-aa78-6229827dff89") # Run inference sentences = [ 'What are the two main categories of AI agents described in the context? ', 'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.', 'This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.\nThese models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else.\nThe fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9167 | | cosine_accuracy@3 | 1.0 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.9167 | | cosine_precision@3 | 0.3333 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.9167 | | cosine_recall@3 | 1.0 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9692** | | cosine_mrr@10 | 0.9583 | | cosine_map@100 | 0.9583 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 156 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 156 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 21.3 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.15 tokens</li><li>max: 214 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>When did Meta release the original Llama model? </code> | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> | | <code>What was significant about the release of Llama 2 in July?</code> | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> | | <code>Why does the author find the term “agents” frustrating? </code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. 
Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - 
`auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9792 | | 2.0 | 32 | 0.9484 | | 3.0 | 48 | 0.9430 | | 3.125 | 50 | 0.9430 | | 4.0 | 64 | 0.9401 | | 5.0 | 80 | 0.9609 | | 6.0 | 96 | 0.9692 | | 6.25 | 100 | 0.9692 | | 7.0 | 112 | 0.9692 | | 8.0 | 128 | 0.9692 | | 9.0 | 144 | 0.9692 | | 9.375 | 150 | 0.9692 | | 10.0 | 160 | 0.9692 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
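For readers who want to reproduce the loss setup above, here is a minimal sketch of the MatryoshkaLoss configuration using the sentence-transformers API. The base model and the training pairs below are placeholders, not the ones actually behind this card:

```python
# Minimal sketch of the MatryoshkaLoss setup described above.
# The base model and the training pairs are placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # placeholder 768-dim base

train_examples = [
    InputExample(texts=[
        "When did Meta release the original Llama model?",
        "Then in February, Meta released Llama. [...]",
    ]),
    InputExample(texts=[
        "Why does the author find the term 'agents' frustrating?",
        "'Agents' still haven't really happened yet. [...]",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)

# Wrap the in-batch-negatives loss so it is applied at every truncation dim.
base_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # matches the JSON config above
)

model.fit(train_objectives=[(train_dataloader, loss)], epochs=10)
```

At inference time, embeddings from a model trained this way can be truncated to any of the listed dimensions with only a modest quality drop.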
noureldinayman/gemma-3-1-finetuned_v1
noureldinayman
2025-05-05T02:19:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3_text", "trl", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T02:18:55Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** noureldinayman - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
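A possible way to load this checkpoint for inference is via Unsloth's own loader. This is an untested sketch; if the repo only contains LoRA adapters, `from_pretrained` resolves the base model from the adapter config:

```python
# Untested sketch: loading the fine-tune for inference with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="noureldinayman/gemma-3-1-finetuned_v1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```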
RLHF-And-Friends/SFT-TLDR-Llama-3.2-3B-SMALL
RLHF-And-Friends
2025-05-05T02:18:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "dataset:tldr-sft", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T02:15:00Z
--- base_model: meta-llama/Llama-3.2-3B datasets: tldr-sft library_name: transformers model_name: SFT-TLDR-Llama-3.2-3B-SMALL tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for SFT-TLDR-Llama-3.2-3B-SMALL This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [tldr-sft](https://huggingface.co/datasets/tldr-sft) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="RLHF-And-Friends/SFT-TLDR-Llama-3.2-3B-SMALL", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/RADFAN/SFT-TLDR/runs/4ooxsjg8) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/Aya-X-Mod-i1-GGUF
mradermacher
2025-05-05T02:16:21Z
0
0
transformers
[ "transformers", "gguf", "matrixportal", "tr", "en", "base_model:matrixportal/Aya-X-Mod", "base_model:quantized:matrixportal/Aya-X-Mod", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-04T21:22:59Z
--- base_model: matrixportal/Aya-X-Mod language: - tr - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - matrixportal --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/matrixportal/Aya-X-Mod <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Aya-X-Mod-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ1_M.gguf) | i1-IQ1_M | 2.5 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Aya-X-Mod-i1-GGUF/resolve/main/Aya-X-Mod.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
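As a concrete starting point, here is one way the quants above can be run locally with llama-cpp-python. The file name comes from the table; everything else is an assumption of a standard local setup (`pip install llama-cpp-python huggingface_hub`):

```python
# Sketch: running the recommended Q4_K_M quant locally with llama-cpp-python.
# The chat template is read from the GGUF metadata where available.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Aya-X-Mod-i1-GGUF",
    filename="Aya-X-Mod.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```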
allura-org/remnant-qwen3-8b
allura-org
2025-05-05T02:14:31Z
0
1
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "roleplay", "conversational", "axolotl", "qwen", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T18:45:03Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-8B-Base tags: - roleplay - conversational - axolotl - qwen --- # Remnant Qwen3 8b (series 1) [English](./README.md) | [简体中文](./README-cn.md) *There's a wisp of dust in the air. It feels like its from a bygone era, but you don't know where from. It lands on your tongue. It tastes nice.* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/_ovgodU331FO4YAqFGCnk.png) Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation. ## Quants GGUF: - Todo! EXL3: - Todo! EXL2: - Todo! MISC: - Todo! ## Recommended Settings Chat template: ChatML. Apparently Llama 3 format works too, though? Ymmv :3 Samplers: - `0.8` temp - `0.1` min_p - `0.5` presence penalty ## Credits Humongous thanks to Allura, ilya <3 Big thanks to the developers of Axolotl (whose training framework I used), Tongyi Qianwen/Qwen/Alibaba (whose model I used), Prime Intellect (whose GPUs I used), and my bank (whose debit card I used) ## Misc [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml # === Model Configuration === base_model: Qwen/Qwen3-8B-Base load_in_8bit: false load_in_4bit: false # === Training Setup === num_epochs: 2 micro_batch_size: 32 gradient_accumulation_steps: 1 sequence_len: 8192 sample_packing: true pad_to_sequence_len: true # === Hyperparameter Configuration === optimizer: apollo_adamw_layerwise # Apollo-mini configuration: optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200" # Regular Apollo configuration: # optim_args: optim_target_modules: all_linear learning_rate: 2e-5 lr_scheduler: rex weight_decay: 0.01 warmup_ratio: 0 # === Data Configuration === datasets: - path: allura-org/inkmix-v3.0 type: chat_template split: train field_messages: conversations message_field_role: from message_field_content: value dataset_prepared_path: last_run_prepared chat_template: chatml # === Plugins === plugins: - axolotl.integrations.liger.LigerPlugin - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin # === Hardware Optimization === gradient_checkpointing: unsloth gradient_checkpointing_kwargs: use_reentrant: false liger_rope: true liger_rms_norm: true liger_glu_activation: true cut_cross_entropy: true # === Wandb Tracking === wandb_project: qwen3-8b-inkmix-v3 # === Checkpointing === saves_per_epoch: 2 save_total_limit: 3 # === Advanced Settings === output_dir: /ephemeral/ckpts bf16: auto flash_attention: true train_on_inputs: false group_by_length: false logging_steps: 1 trust_remote_code: true ``` </details>
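For users running the model with plain transformers rather than a frontend, the recommended samplers map roughly onto `generate()` as sketched below. Note that transformers has no direct presence-penalty knob, so `repetition_penalty` stands in as an approximation of that setting:

```python
# Rough mapping of the recommended samplers onto transformers' generate().
# repetition_penalty is only a stand-in for the 0.5 presence penalty above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/remnant-qwen3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Describe the dusty room you wake up in."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,
    min_p=0.1,
    repetition_penalty=1.1,  # approximation; see note above
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```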
Palu1006/ner-bert-lenerbr-v2
Palu1006
2025-05-05T02:14:21Z
18
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:lener_br", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-29T14:39:55Z
--- library_name: transformers license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer datasets: - lener_br metrics: - precision - recall - f1 - accuracy model-index: - name: ner-bert-lenerbr-v2 results: - task: name: Token Classification type: token-classification dataset: name: lener_br type: lener_br config: lener_br split: validation args: lener_br metrics: - name: Precision type: precision value: 0.8383898473131073 - name: Recall type: recall value: 0.909247311827957 - name: F1 type: f1 value: 0.8723821314350563 - name: Accuracy type: accuracy value: 0.9698599661724595 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner-bert-lenerbr-v2 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: 0.1931 - Precision: 0.8384 - Recall: 0.9092 - F1: 0.8724 - Accuracy: 0.9699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0601 | 1.0 | 979 | 0.1134 | 0.8575 | 0.8516 | 0.8546 | 0.9715 | | 0.0345 | 2.0 | 1958 | 0.1402 | 0.7896 | 0.9022 | 0.8421 | 0.9657 | | 0.0243 | 3.0 | 2937 | 0.1350 | 0.8124 | 0.9060 | 0.8566 | 0.9696 | | 0.0256 | 4.0 | 3916 | 0.1592 | 0.7624 | 0.9073 | 0.8286 | 0.9640 | | 0.0143 | 5.0 | 4895 | 0.1951 | 0.8462 | 0.8983 | 0.8715 | 0.9678 | | 0.0139 | 6.0 | 5874 | 0.1874 | 0.8252 | 0.9110 | 0.8660 | 0.9679 | | 0.0051 | 7.0 | 6853 | 0.1685 | 0.8301 | 0.9049 | 0.8659 | 0.9692 | | 0.0067 | 8.0 | 7832 | 0.1931 | 0.8384 | 0.9092 | 0.8724 | 0.9699 | | 0.0018 | 9.0 | 8811 | 0.2004 | 0.8206 | 0.9110 | 0.8634 | 0.9692 | | 0.0044 | 10.0 | 9790 | 0.2000 | 0.8295 | 0.9090 | 0.8674 | 0.9694 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
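A minimal usage sketch for this tagger on Portuguese legal text; the example sentence is illustrative only:

```python
# Minimal usage sketch for the LeNER-Br NER model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Palu1006/ner-bert-lenerbr-v2",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)
print(ner("O Supremo Tribunal Federal julgou a ação direta de inconstitucionalidade em 2011."))
```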
hellojahid/carDD_bbox_train_only_lora
hellojahid
2025-05-05T02:11:29Z
0
0
peft
[ "peft", "safetensors", "llava_llama", "arxiv:1910.09700", "base_model:liuhaotian/llava-v1.5-13b", "base_model:adapter:liuhaotian/llava-v1.5-13b", "region:us" ]
null
2025-05-05T01:46:19Z
--- base_model: liuhaotian/llava-v1.5-13b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf
RichardErkhov
2025-05-05T02:11:28Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T23:29:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) IE_L3_1000steps_1e6rate_05beta_cSFTDPO - GGUF - Model creator: https://huggingface.co/tsavage68/ - Original model: https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_05beta_cSFTDPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q2_K.gguf) | Q2_K | 2.96GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_S.gguf) | IQ3_S | 3.43GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ3_M.gguf) | IQ3_M | 3.52GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K.gguf) | Q3_K | 3.74GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_0.gguf) | Q4_0 | 4.34GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K.gguf) | Q4_K | 4.58GB | | 
[IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q4_1.gguf) | Q4_1 | 4.78GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_0.gguf) | Q5_0 | 5.21GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K.gguf) | Q5_K | 5.34GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q5_1.gguf) | Q5_1 | 5.65GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q6_K.gguf) | Q6_K | 6.14GB | | [IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/tsavage68_-_IE_L3_1000steps_1e6rate_05beta_cSFTDPO-gguf/blob/main/IE_L3_1000steps_1e6rate_05beta_cSFTDPO.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3 base_model: tsavage68/IE_L3_1000steps_1e6rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: IE_L3_1000steps_1e6rate_05beta_cSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IE_L3_1000steps_1e6rate_05beta_cSFTDPO This model is a fine-tuned version of [tsavage68/IE_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/IE_L3_1000steps_1e6rate_SFT) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1802 - Rewards/chosen: -1.4168 - Rewards/rejected: -13.8543 - Rewards/accuracies: 0.7400 - Rewards/margins: 12.4374 - Logps/rejected: -103.3358 - Logps/chosen: -85.6314 - Logits/rejected: -0.7970 - Logits/chosen: -0.7188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.1906 | 0.4 | 50 | 0.1802 | -1.0109 | -11.1903 | 0.7400 | 10.1794 | -98.0078 | -84.8196 | -0.7939 | -0.7206 | | 0.1386 | 0.8 | 100 | 0.1802 | -1.2190 | -12.1625 | 0.7400 | 10.9435 | -99.9523 | -85.2358 | -0.7944 | -0.7197 | | 0.1386 | 1.2 | 150 | 0.1802 | -1.2782 | -12.5852 | 0.7400 | 11.3070 | -100.7976 | -85.3541 | -0.7943 | -0.7189 | | 0.1733 | 1.6 | 200 | 0.1802 | -1.3094 | -13.0296 | 0.7400 | 11.7202 | -101.6864 | -85.4166 | -0.7948 | -0.7186 | | 0.2253 | 2.0 | 250 | 0.1802 | -1.3248 | -13.1625 | 0.7400 | 11.8377 | -101.9522 | -85.4473 | -0.7952 | -0.7186 | | 0.1386 | 2.4 | 300 | 0.1802 | -1.3337 | -13.2622 | 0.7400 | 11.9285 | -102.1515 | -85.4652 | -0.7942 | -0.7174 | | 0.1213 | 2.8 | 350 | 0.1802 | -1.3670 | -13.4507 | 0.7400 | 12.0837 | -102.5286 | -85.5317 | -0.7953 | -0.7178 | | 0.1906 | 3.2 | 400 | 0.1802 | -1.3818 | -13.5334 | 0.7400 | 12.1517 | -102.6941 | -85.5613 | -0.7964 | -0.7189 | | 0.1906 | 3.6 | 450 | 0.1802 | -1.3800 | -13.5899 | 0.7400 | 12.2099 | -102.8071 | -85.5577 | -0.7964 | -0.7189 | | 0.2079 | 4.0 | 500 | 0.1802 | -1.3816 | -13.6722 | 0.7400 | 12.2906 | -102.9716 | -85.5610 | -0.7966 | -0.7187 | | 0.156 | 4.4 | 550 | 0.1802 | -1.4142 | -13.7800 | 0.7400 | 12.3657 | -103.1872 | -85.6262 | -0.7956 | -0.7175 | | 0.1213 | 4.8 | 600 | 0.1802 | -1.3864 | -13.7736 | 0.7400 | 12.3872 | -103.1744 | -85.5705 | -0.7974 | -0.7192 | | 0.1906 | 5.2 | 650 | 0.1802 | -1.4252 | -13.8450 | 0.7400 | 12.4197 | -103.3172 | -85.6483 | -0.7969 | -0.7187 | | 0.2426 | 5.6 | 700 | 0.1802 | -1.4087 | -13.8154 | 0.7400 | 12.4068 | -103.2581 | -85.6151 | -0.7974 | -0.7196 | | 0.2599 | 6.0 | 750 | 0.1802 | -1.4077 | -13.8712 | 0.7400 | 12.4635 | -103.3696 | -85.6131 | -0.7977 | -0.7194 | | 0.1213 | 6.4 | 800 | 0.1802 | -1.4158 | -13.9034 | 0.7400 | 12.4876 | -103.4339 | -85.6293 | -0.7977 | -0.7195 | | 0.2426 | 6.8 | 850 | 0.1802 | -1.4105 | -13.8922 | 0.7400 | 12.4817 | -103.4116 | -85.6187 | -0.7979 | -0.7200 | | 0.1733 | 7.2 | 900 | 0.1802 | -1.4075 | -13.8657 | 0.7400 | 12.4582 | -103.3587 | -85.6128 | -0.7970 | -0.7189 | | 0.1386 | 7.6 | 950 | 0.1802 | -1.4138 | -13.8523 | 0.7400 | 12.4386 | -103.3319 | -85.6253 | -0.7971 | -0.7188 | | 0.156 | 8.0 | 1000 | 0.1802 | -1.4168 | -13.8543 | 0.7400 | 12.4374 | -103.3358 | -85.6314 | -0.7970 | -0.7188 | ### Framework 
versions - Transformers 4.44.2 - Pytorch 2.0.0+cu117 - Datasets 3.0.0 - Tokenizers 0.19.1
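For orientation, the hyperparameters in the original description correspond roughly to the TRL DPO setup below. This is a hedged reconstruction, not the author's training script: the dataset rows are placeholders, and on older TRL versions the tokenizer is passed as `tokenizer=` rather than `processing_class=`:

```python
# Hedged sketch of the DPO setup implied by the hyperparameters above.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "tsavage68/IE_L3_1000steps_1e6rate_SFT"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data; the actual dataset is not documented.
train_dataset = Dataset.from_dict({
    "prompt":   ["Extract the key entities from this note: ..."],
    "chosen":   ["A faithful, structured extraction."],
    "rejected": ["An off-topic or hallucinated answer."],
})

args = DPOConfig(
    output_dir="ie-dpo",
    beta=0.5,                        # the "05beta" in the model name
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```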
lulucas3/llama-customized-for-me-try1
lulucas3
2025-05-05T02:09:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-2-7b-chat-bnb-4bit", "base_model:adapter:unsloth/llama-2-7b-chat-bnb-4bit", "region:us" ]
null
2025-05-05T02:07:11Z
--- base_model: unsloth/llama-2-7b-chat-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
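Since this repo ships a PEFT adapter for the 4-bit base it lists, one plausible loading path (untested sketch) is to attach the adapter to that base:

```python
# Untested sketch: attaching the PEFT adapter to its listed 4-bit base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-2-7b-chat-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "lulucas3/llama-customized-for-me-try1")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b-chat-bnb-4bit")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```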
carozum/gemma-7b-it-raft
carozum
2025-05-05T02:09:38Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T02:09:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pranjalsahu/qwen2-7b-instruct-trl-sft-ChartQA-1
pranjalsahu
2025-05-05T02:07:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T02:01:52Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct library_name: transformers model_name: qwen2-7b-instruct-trl-sft-ChartQA-1 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2-7b-instruct-trl-sft-ChartQA-1 This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pranjalsahu/qwen2-7b-instruct-trl-sft-ChartQA-1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pranjalsahu5/qwen2-7b-instruct-trl-sft-ChartQA-1/runs/9g96wyry) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.50.0.dev0 - Pytorch: 2.3.1 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
darkc0de/Xortron2025
darkc0de
2025-05-05T02:04:57Z
11904
7
null
[ "gguf", "mistral", "uncensored", "unsloth", "dpo", "sft", "harmful", "text-generation", "en", "dataset:huihui-ai/Guilherme34_uncensor", "dataset:mlabonne/orpo-dpo-mix-40k-flat", "dataset:Undi95/toxic-dpo-v0.1-NoWarning", "base_model:darkc0de/Xortron24DPO", "base_model:quantized:darkc0de/Xortron24DPO", "license:wtfpl", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-04T00:54:44Z
--- license: wtfpl language: - en pipeline_tag: text-generation tags: - uncensored - gguf - unsloth - dpo - sft - harmful datasets: - huihui-ai/Guilherme34_uncensor - mlabonne/orpo-dpo-mix-40k-flat - Undi95/toxic-dpo-v0.1-NoWarning base_model: - darkc0de/Xortron24DPO --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6540a02d1389943fef4d2640/2Q4ppAg4E9WMCocGYRoUb.jpeg) **Xortron2025**, an **Uncensored** Large Language Model for **Offline** and **Local** use. Please use **responsibly**, or at least **discreetly**. Run with **LM Studio** or **GPT4All**. You'll need **21GB+** of RAM.
kokovova/e0bdbe08-20a1-4ed7-9521-c951bed05895
kokovova
2025-05-05T02:04:23Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/codellama-7b", "base_model:adapter:unsloth/codellama-7b", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T01:52:54Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codellama-7b tags: - axolotl - generated_from_trainer model-index: - name: e0bdbe08-20a1-4ed7-9521-c951bed05895 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codellama-7b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 45a0a7b62fa7d296_train_data.json ds_type: json format: custom path: /workspace/input_data/45a0a7b62fa7d296_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/e0bdbe08-20a1-4ed7-9521-c951bed05895 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/45a0a7b62fa7d296_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cc7a0e21-e4e7-4f70-933a-9cc6118c59d5 wandb_project: s56-4 wandb_run: your_name wandb_runid: cc7a0e21-e4e7-4f70-933a-9cc6118c59d5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # e0bdbe08-20a1-4ed7-9521-c951bed05895 This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9013 | 0.2402 | 400 | 1.2966 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
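To use the resulting LoRA outside axolotl, one option is to attach it to the base model and merge it for plain transformers inference. This is a sketch under the assumption that the adapter loads onto the full-precision base and enough memory is available:

```python
# Sketch: merging the LoRA produced by the config above into its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/codellama-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "kokovova/e0bdbe08-20a1-4ed7-9521-c951bed05895")
model = model.merge_and_unload()  # fold LoRA deltas into the base weights
model.save_pretrained("codellama-7b-merged")
```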
mradermacher/Medra-i1-GGUF
mradermacher
2025-05-05T02:01:00Z
0
0
transformers
[ "transformers", "gguf", "text-generation", "medical-ai", "question-answering", "summarization", "dermatology", "gemma-3", "qlora", "unsloth", "fine-tuned", "en", "ro", "dataset:qiaojin/PubMedQA", "dataset:Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning", "dataset:lavita/MedQuAD", "base_model:drwlf/Medra", "base_model:quantized:drwlf/Medra", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
question-answering
2025-05-05T00:46:38Z
--- base_model: drwlf/Medra datasets: - qiaojin/PubMedQA - Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning - lavita/MedQuAD language: - en - ro library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation - medical-ai - question-answering - summarization - dermatology - gemma-3 - qlora - unsloth - fine-tuned --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/drwlf/Medra <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Medra-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Medra-i1-GGUF/resolve/main/Medra.i1-Q6_K.gguf) | i1-Q6_K | 3.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
hxyscott/test_quick_finetune
hxyscott
2025-05-05T01:55:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T00:01:29Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
juhw/q479
juhw
2025-05-05T01:48:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T01:44:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abatutinMP/tst_16bit_v2
abatutinMP
2025-05-05T01:44:59Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T01:43:33Z
--- base_model: unsloth/Llama-3.2-1B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** abatutinMP - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
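The card above ships no inference snippet; a minimal loading sketch with plain `transformers` could look like the following — the repo id comes from this card, while the dtype, device placement, and example prompt are assumptions rather than documented settings.

```python
# Minimal loading sketch, assuming the repo contains merged 16-bit weights.
# dtype, device_map, and the prompt below are illustrative, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abatutinMP/tst_16bit_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```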
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_ep7_22
MinaMila
2025-05-05T01:44:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:44:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
infogep/290bbd8d-6b74-4fd3-aed8-c94e9dff4396
infogep
2025-05-05T01:40:14Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T01:31:42Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 290bbd8d-6b74-4fd3-aed8-c94e9dff4396 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Llama-3.2-1B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - cb941ee15ac5879e_train_data.json ds_type: json format: custom path: /workspace/input_data/cb941ee15ac5879e_train_data.json type: field_instruction: question field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogep/290bbd8d-6b74-4fd3-aed8-c94e9dff4396 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/cb941ee15ac5879e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: eb9068bc-4e56-4087-a32f-937f527f23aa wandb_project: s56-7 wandb_run: your_name wandb_runid: eb9068bc-4e56-4087-a32f-937f527f23aa warmup_steps: 25 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 290bbd8d-6b74-4fd3-aed8-c94e9dff4396 This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4129 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3874 | 0.0974 | 500 | 1.4129 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
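Since the repo publishes a LoRA adapter (see `library_name: peft`), running it means attaching the adapter to its base model. A minimal sketch, assuming the 4-bit setup mirrors the axolotl config above; the prompt is illustrative:

```python
# Sketch: attach this LoRA adapter to its base model for inference.
# Base model and 4-bit quantization mirror the axolotl config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B"
adapter_id = "infogep/290bbd8d-6b74-4fd3-aed8-c94e9dff4396"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Question: What is LoRA?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```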
dimasik2987/d0b07cd3-70c0-41d0-8aa2-7a9172a79a22
dimasik2987
2025-05-05T01:36:59Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:01-ai/Yi-1.5-9B-Chat-16K", "base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T00:47:57Z
--- library_name: peft license: apache-2.0 base_model: 01-ai/Yi-1.5-9B-Chat-16K tags: - axolotl - generated_from_trainer model-index: - name: d0b07cd3-70c0-41d0-8aa2-7a9172a79a22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: 01-ai/Yi-1.5-9B-Chat-16K bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - e18896165f133259_train_data.json ds_type: json format: custom path: /workspace/input_data/e18896165f133259_train_data.json type: field_input: tag_list field_instruction: title field_output: pseudo_caption format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: dimasik2987/d0b07cd3-70c0-41d0-8aa2-7a9172a79a22 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/e18896165f133259_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 746a2230-4d70-43c5-9b49-3cbb01738510 wandb_project: s56-28 wandb_run: your_name wandb_runid: 746a2230-4d70-43c5-9b49-3cbb01738510 warmup_steps: 20 weight_decay: 0.01 xformers_attention: true ``` </details><br> # d0b07cd3-70c0-41d0-8aa2-7a9172a79a22 This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3595 | 0.0658 | 400 | 1.3025 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
DevQuasar/huihui-ai.Qwen3-1.7B-abliterated-GGUF
DevQuasar
2025-05-05T01:36:16Z
0
0
null
[ "gguf", "text-generation", "base_model:huihui-ai/Qwen3-1.7B-abliterated", "base_model:quantized:huihui-ai/Qwen3-1.7B-abliterated", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T01:22:03Z
--- base_model: - huihui-ai/Qwen3-1.7B-abliterated pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [huihui-ai/Qwen3-1.7B-abliterated](https://huggingface.co/huihui-ai/Qwen3-1.7B-abliterated) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
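A quick local test of a download from this repo can use `llama-cpp-python`; note the `.gguf` filename below is a placeholder, since the card does not list the individual quant files — substitute the file you actually downloaded.

```python
# Sketch using llama-cpp-python; the .gguf filename is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="huihui-ai.Qwen3-1.7B-abliterated.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```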
ThatOrJohn/road-surface-grip-austin
ThatOrJohn
2025-05-05T01:32:19Z
0
0
null
[ "en", "license:mit", "region:us" ]
null
2025-05-05T00:20:20Z
--- license: mit language: - en --- # Road Grip Prediction with XGBoost This repository contains a trained XGBoost model for predicting road surface grip conditions (GOOD, FAIR, POOR) using sensor and weather data from the [City of Austin's real-time road conditions](https://data.austintexas.gov/Transportation-and-Mobility/Real-Time-Road-Conditions/ypbq-i42h/about_data) feed. Data at training time comes from IceSight Model 5433-3X sensors. ## 🧠 Model Summary - **Algorithm**: XGBoost Classifier - **Input features**: Surface temperature, air temperature, humidity, etc. - **Target**: `grip_text` (categorized as 0=GOOD, 1=FAIR, 2=POOR) - **Accuracy**: ~99.5% on test data - **Training set size**: ~1 million rows ## 🚀 Quick Start ### 1. Install dependencies ```bash pip install -r requirements.txt ``` ### 2. Run the notebook ```bash jupyter notebook RoadGrip_XGBoost.ipynb ``` Or open the notebook in [Google Colab](https://colab.research.google.com/). ### 3. Make a prediction with the trained model ```python import joblib import pandas as pd model = joblib.load("best_grip_model_xgb.pkl") sample = pd.DataFrame([{ 'air_temp_primary': 12.4, 'air_temp_secondary': 12.6, 'air_temp_tertiary': 12.5, 'temp_surface': 11.2, 'relative_humidity': 84.0 }]) pred = model.predict(sample) print("Predicted grip:", pred[0]) ``` --- ## 🔍 Files - `RoadGrip_XGBoost.ipynb`: Jupyter Notebook for model training and evaluation - `best_grip_model_xgb.pkl`: Trained XGBoost model (multiclass classifier) - `requirements.txt`: Python dependencies ## 📈 Model Card (Hugging Face) 👉 [View Model Card on Hugging Face](https://huggingface.co/your-username/road-grip-xgb) ## 🗺️ Next Steps - Build a site that forecasts future road grip using weather forecast data (via Open-Meteo) - Display predictions on a map centered on Austin, TX - Add animations and saved locations ---
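One small addition to the Quick Start above: `model.predict` returns the encoded class, so mapping it back to a human-readable label uses the 0=GOOD, 1=FAIR, 2=POOR encoding stated in the model summary. A self-contained sketch:

```python
# Decode the numeric class returned by model.predict back to the
# GOOD/FAIR/POOR labels, per the 0/1/2 encoding stated above.
import joblib
import pandas as pd

GRIP_LABELS = {0: "GOOD", 1: "FAIR", 2: "POOR"}

model = joblib.load("best_grip_model_xgb.pkl")
sample = pd.DataFrame([{
    "air_temp_primary": 12.4,
    "air_temp_secondary": 12.6,
    "air_temp_tertiary": 12.5,
    "temp_surface": 11.2,
    "relative_humidity": 84.0,
}])
pred = int(model.predict(sample)[0])
print("Predicted grip:", GRIP_LABELS[pred])
```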
goosull/Llama-3.2-1B-ko-kowiki-Instruct
goosull
2025-05-05T01:28:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:26:22Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** goosull - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
wolfofbackstreet/melotts_chinese_mix_english_onnx
wolfofbackstreet
2025-05-05T01:24:28Z
0
1
null
[ "onnx", "text-to-audio", "zh", "en", "base_model:myshell-ai/MeloTTS-Chinese", "base_model:quantized:myshell-ai/MeloTTS-Chinese", "license:mit", "region:us" ]
text-to-audio
2025-04-28T05:15:49Z
--- license: mit language: - zh - en base_model: - myshell-ai/MeloTTS-Chinese pipeline_tag: text-to-audio --- ### Example ```python from typing import Iterable, List, Tuple import jieba import onnxruntime as ort import soundfile as sf import torch class Lexicon: def __init__(self, lexion_filename: str, tokens_filename: str): tokens = dict() with open(tokens_filename, encoding="utf-8") as f: for line in f: s, i = line.split() tokens[s] = int(i) lexicon = dict() with open(lexion_filename, encoding="utf-8") as f: for line in f: splits = line.split() word_or_phrase = splits[0] phone_tone_list = splits[1:] assert len(phone_tone_list) & 1 == 0, len(phone_tone_list) phones = phone_tone_list[: len(phone_tone_list) // 2] phones = [tokens[p] for p in phones] tones = phone_tone_list[len(phone_tone_list) // 2 :] tones = [int(t) for t in tones] lexicon[word_or_phrase] = (phones, tones) lexicon["呣"] = lexicon["母"] lexicon["嗯"] = lexicon["恩"] self.lexicon = lexicon punctuation = ["!", "?", "…", ",", ".", "'", "-"] for p in punctuation: i = tokens[p] tone = 0 self.lexicon[p] = ([i], [tone]) self.lexicon[" "] = ([tokens["_"]], [0]) def _convert(self, text: str) -> Tuple[List[int], List[int]]: phones = [] tones = [] if text == ",": text = "," elif text == "。": text = "." elif text == "!": text = "!" elif text == "?": text = "?" if text not in self.lexicon: print("t", text) if len(text) > 1: for w in text: print("w", w) p, t = self.convert(w) if p: phones += p tones += t return phones, tones phones, tones = self.lexicon[text] return phones, tones def convert(self, text_list: Iterable[str]) -> Tuple[List[int], List[int]]: phones = [] tones = [] for text in text_list: print(text) p, t = self._convert(text) phones += p tones += t return phones, tones class OnnxModel: def __init__(self, filename): session_opts = ort.SessionOptions() session_opts.inter_op_num_threads = 1 session_opts.intra_op_num_threads = 4 self.session_opts = session_opts self.model = ort.InferenceSession( filename, sess_options=self.session_opts, providers=["CPUExecutionProvider"], ) meta = self.model.get_modelmeta().custom_metadata_map self.bert_dim = int(meta["bert_dim"]) self.ja_bert_dim = int(meta["ja_bert_dim"]) self.add_blank = int(meta["add_blank"]) self.sample_rate = int(meta["sample_rate"]) self.speaker_id = int(meta["speaker_id"]) self.lang_id = int(meta["lang_id"]) self.sample_rate = int(meta["sample_rate"]) def __call__(self, x, tones): """ Args: x: 1-D int64 torch tensor tones: 1-D int64 torch tensor """ x = x.unsqueeze(0) tones = tones.unsqueeze(0) print(x.shape, tones.shape) sid = torch.tensor([self.speaker_id], dtype=torch.int64) noise_scale = torch.tensor([0.6], dtype=torch.float32) length_scale = torch.tensor([1.0], dtype=torch.float32) noise_scale_w = torch.tensor([0.8], dtype=torch.float32) x_lengths = torch.tensor([x.shape[-1]], dtype=torch.int64) y = self.model.run( ["y"], { "x": x.numpy(), "x_lengths": x_lengths.numpy(), "tones": tones.numpy(), "sid": sid.numpy(), "noise_scale": noise_scale.numpy(), "noise_scale_w": noise_scale_w.numpy(), "length_scale": length_scale.numpy(), }, )[0][0][0] return y def main(): lexicon = Lexicon(lexion_filename="./lexicon.txt", tokens_filename="./tokens.txt") text = "这是一个使用 next generation kaldi 的 text to speech 中英文例子. Thank you! 你觉得如何呢? are you ok? Fantastic! How about you?" 
    text = text.lower()  # this step is crucial for splitting words correctly

    s = jieba.cut(text, HMM=True)
    phones, tones = lexicon.convert(s)

    model = OnnxModel("./model.onnx")

    if model.add_blank:
        new_phones = [0] * (2 * len(phones) + 1)
        new_tones = [0] * (2 * len(tones) + 1)
        new_phones[1::2] = phones
        new_tones[1::2] = tones
        phones = new_phones
        tones = new_tones

    phones = torch.tensor(phones, dtype=torch.int64)
    tones = torch.tensor(tones, dtype=torch.int64)
    print(phones.shape, tones.shape)

    y = model(x=phones, tones=tones)

    sf.write("./test.wav", y, model.sample_rate)


if __name__ == "__main__":
    main()
```
Lucycao110/LucyModel
Lucycao110
2025-05-05T01:23:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T01:23:34Z
--- license: apache-2.0 ---
Subimal10/llama3b-legal-sft
Subimal10
2025-05-05T01:21:17Z
0
1
transformers
[ "transformers", "safetensors", "fine-tuned", "llama-3", "lora", "legal", "india", "text-generation", "en", "dataset:Subimal10/indian-legal-data-cleaned", "dataset:Hashif/indianlegal-llama-2", "dataset:Prarabdha/indian-legal-acts", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T10:32:13Z
--- license: apache-2.0 language: - en base_model: meta-llama/Llama-3.2-3B-Instruct datasets: - Subimal10/indian-legal-data-cleaned - Hashif/indianlegal-llama-2 - Prarabdha/indian-legal-acts metrics: - name: perplexity type: float value: 1.53 new_version: 1.0.0 library_name: transformers pipeline_tag: text-generation tags: - fine-tuned - llama-3 - lora - legal - india --- # llama3b-legal-sft **Fine-tuned** LoRA adapter on Meta Llama-3.2-3B-Instruct, 4-bit quantization **Task**: Draft Indian-law documents (eviction notices, affidavits, show-cause notices, leases, POAs, etc.) --- ## Model Details - **Base model**: `meta-llama/Llama-3.2-3B-Instruct` - **Fine-tuning recipe**: - Data: 2.7 M cleaned Q&A pairs from Prarabdha gated repos - +11 K examples from `Hashif/indianlegal-llama-2` - 90 % train / 10 % valid split - 4-bit quant + LoRA (r=8, α=16, dropout=0.1) - Trainer: custom `SFTTrainer`, fp16, batch=4→16, max_steps=20 000 --- ## Evaluation | Metric | Value | |------------|-------| | Perplexity | 1.53 | > **Inference speed** on A100: ~0.5 it/s @ bs=1 --- ## Limitations & Intended Use - **Intended** for drafting legal-style documents under Indian law - **Not** a substitute for qualified legal counsel - May occasionally repeat phrases or lose document structure if prompted poorly --- ## Sample Validation > “✅ Eviction notice generated by this model was reviewed and approved by Advocate Abhishek Chatterjee.” --- ## Usage ```python from transformers import AutoTokenizer, BitsAndBytesConfig, AutoModelForCausalLM from peft import PeftModel import os HF_TOKEN = os.getenv("HF_TOKEN") # or set directly "hf_xxx" REPO_ID = "Subimal10/llama3b-legal-sft" # 1️⃣ Load tokenizer + base model in 4-bit + LoRA adapter tokenizer = AutoTokenizer.from_pretrained(REPO_ID, use_fast=True) bnb_cfg = BitsAndBytesConfig(load_in_4bit=True) base = AutoModelForCausalLM.from_pretrained( REPO_ID, quantization_config=bnb_cfg, device_map="auto", trust_remote_code=True, token=HF_TOKEN, ) model = PeftModel.from_pretrained(base, REPO_ID, device_map="auto", token=HF_TOKEN) model.eval() # 2️⃣ Inference with an instruction prompt prompt = ( "<s>[INST] <<SYS>>\n" "You are a senior contract lawyer.\n" "<</SYS>>\n\n" "### Instruction:\n" "Draft a formal Show Cause Notice under Indian contract law to a contractor for delays in project delivery.\n" "### Response:\n" "[/INST] " ) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) gen_ids = model.generate( **inputs, max_new_tokens=400, do_sample=True, temperature=0.7, top_p=0.9, pad_token_id=tokenizer.eos_token_id, ) completion = tokenizer.decode(gen_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True) print("=== Show Cause Notice ===\n", completion)
```
Yeana/my_extractive_app
Yeana
2025-05-05T01:19:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-05T00:08:55Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_extractive_app results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_extractive_app This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0478 - Precision: 0.8912 - Recall: 0.9069 - F1: 0.8990 - Accuracy: 0.9828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0603 | 1.0 | 29010 | 0.0566 | 0.8755 | 0.8920 | 0.8837 | 0.9798 | | 0.0438 | 2.0 | 58020 | 0.0478 | 0.8912 | 0.9069 | 0.8990 | 0.9828 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
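The auto-generated card omits a usage example; given the `token-classification` pipeline tag, a minimal sketch follows — the input sentence is illustrative, and the label set depends on the (unspecified) training data.

```python
# Minimal sketch for this token-classification checkpoint; the example
# sentence is illustrative and the entity labels depend on the training data.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Yeana/my_extractive_app",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face is based in New York City."))
```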
fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E15
fffanx
2025-05-05T01:17:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:16:47Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent18_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent18_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E15
fffanx
2025-05-05T01:16:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:15:44Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent16_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent16_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ASethi04/meta-llama-Llama-3.1-8B-tulu-sharegpt-second-lora-4-0.0001
ASethi04
2025-05-05T01:14:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:27:04Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-tulu-sharegpt-second-lora-4-0.0001 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-tulu-sharegpt-second-lora-4-0.0001 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-tulu-sharegpt-second-lora-4-0.0001", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/r5qjjozn) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent12_E15
fffanx
2025-05-05T01:14:05Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:13:36Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent12_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent12_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent12_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
litxtop/tiny-llama-cpsc254
litxtop
2025-05-05T01:13:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T09:41:27Z
--- library_name: transformers license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - generated_from_trainer model-index: - name: tiny-llama-cpsc254 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-llama-cpsc254 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cpu - Datasets 3.5.1 - Tokenizers 0.21.1
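No usage example is included; a minimal chat-style sketch with the `pipeline` API follows, mirroring the quick-start pattern used elsewhere in this collection — the question and generation settings are illustrative.

```python
# Minimal inference sketch; the question is illustrative.
from transformers import pipeline

question = "What topics does CPSC 254 cover?"
generator = pipeline("text-generation", model="litxtop/tiny-llama-cpsc254")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```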
fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E15
fffanx
2025-05-05T01:13:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:13:05Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent11_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent11_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E15
fffanx
2025-05-05T01:11:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:11:31Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent8_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent8_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E15
fffanx
2025-05-05T01:11:27Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:10:59Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent7_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent7_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent5_E15
fffanx
2025-05-05T01:10:23Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:09:54Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent5_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent5_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent5_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E15
fffanx
2025-05-05T01:08:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:07:46Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent1_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent1_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent0_E15
fffanx
2025-05-05T01:07:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T01:07:13Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent0_E15 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent0_E15 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent0_E15", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
8688chris/helldivers2-jarvis-asrV3
8688chris
2025-05-05T01:07:38Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-05T00:58:59Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base-960h tags: - generated_from_trainer metrics: - wer model-index: - name: helldivers2-jarvis-asrV3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # helldivers2-jarvis-asrV3 This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 31.7272 - Wer: 0.2086 - Cer: 0.8074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 524.1516 | 1.0 | 30 | 263.5094 | 0.3714 | 0.8227 | | 364.9497 | 2.0 | 60 | 191.6008 | 0.3314 | 0.8182 | | 284.4412 | 3.0 | 90 | 140.4837 | 0.3029 | 0.8157 | | 230.4569 | 4.0 | 120 | 114.5283 | 0.28 | 0.8141 | | 194.9766 | 5.0 | 150 | 104.1673 | 0.2943 | 0.8136 | | 188.876 | 6.0 | 180 | 79.7826 | 0.2686 | 0.8118 | | 176.3613 | 7.0 | 210 | 83.6582 | 0.2543 | 0.8113 | | 164.0082 | 8.0 | 240 | 78.0135 | 0.2486 | 0.8112 | | 132.4165 | 9.0 | 270 | 81.7094 | 0.2486 | 0.8117 | | 152.0892 | 10.0 | 300 | 67.5544 | 0.24 | 0.8102 | | 129.1771 | 11.0 | 330 | 74.8555 | 0.2486 | 0.8111 | | 132.3275 | 12.0 | 360 | 59.0951 | 0.2371 | 0.8099 | | 121.0191 | 13.0 | 390 | 62.3462 | 0.2371 | 0.8098 | | 123.9875 | 14.0 | 420 | 64.6068 | 0.2314 | 0.8100 | | 127.8401 | 15.0 | 450 | 59.1643 | 0.2343 | 0.8101 | | 101.8537 | 16.0 | 480 | 49.6505 | 0.2257 | 0.8090 | | 105.2752 | 17.0 | 510 | 55.4513 | 0.2286 | 0.8090 | | 106.8253 | 18.0 | 540 | 50.9544 | 0.2229 | 0.8085 | | 90.9927 | 19.0 | 570 | 56.8617 | 0.2257 | 0.8088 | | 86.1412 | 20.0 | 600 | 47.8157 | 0.2143 | 0.8080 | | 107.573 | 21.0 | 630 | 45.2232 | 0.2229 | 0.8080 | | 97.8639 | 22.0 | 660 | 46.7115 | 0.22 | 0.8082 | | 91.8944 | 23.0 | 690 | 39.8069 | 0.2171 | 0.8073 | | 80.8078 | 24.0 | 720 | 38.7170 | 0.2171 | 0.8077 | | 67.9368 | 25.0 | 750 | 40.9773 | 0.2229 | 0.8082 | | 72.6615 | 26.0 | 780 | 44.2405 | 0.22 | 0.8084 | | 85.9681 | 27.0 | 810 | 44.6755 | 0.2171 | 0.8079 | | 82.2137 | 28.0 | 840 | 42.0941 | 0.2171 | 0.8079 | | 77.9647 | 29.0 | 870 | 46.9737 | 0.2171 | 0.8080 | | 70.9503 | 30.0 | 900 | 34.8284 | 0.2171 | 0.8080 | | 71.2584 | 31.0 | 930 | 34.1917 | 0.2229 | 0.8078 | | 60.2431 | 32.0 | 960 | 40.1383 | 0.2171 | 0.8080 | | 64.4503 | 33.0 | 990 | 41.7621 | 0.22 | 0.8082 | | 74.9696 | 34.0 | 1020 | 42.6356 | 0.2143 | 0.8078 | | 83.7667 | 35.0 | 1050 | 34.9446 | 0.2114 | 0.8072 | | 65.5813 | 36.0 | 1080 | 40.3642 | 0.2143 | 0.8079 | | 65.3049 | 37.0 | 1110 | 37.6542 | 0.2114 | 0.8073 | | 68.4417 | 38.0 | 1140 | 46.1513 | 0.2343 | 0.8084 | | 60.9022 | 39.0 | 1170 | 45.9998 | 0.22 | 0.8081 | | 66.6904 | 40.0 | 1200 | 40.2499 | 0.2086 | 0.8079 | | 58.7295 | 41.0 | 1230 | 28.5853 | 0.2086 | 0.8071 | | 62.7956 | 42.0 | 1260 | 28.4951 | 0.2057 | 0.8070 | | 66.9006 | 43.0 | 1290 | 32.7322 | 0.2229 | 0.8074 | | 63.8268 | 44.0 | 1320 | 48.1683 | 0.2314 | 0.8085 | | 56.0921 | 45.0 | 1350 | 40.5450 | 0.2257 | 0.8082 | | 54.8101 | 46.0 | 1380 | 36.3487 | 0.2086 | 0.8075 | | 73.7511 | 47.0 | 1410 | 39.1305 | 0.22 | 0.8075 | | 65.4736 | 48.0 | 1440 | 37.1907 | 0.2171 | 0.8075 | | 47.7848 | 49.0 | 1470 | 34.1053 | 0.2029 | 0.8069 | | 63.2612 | 50.0 | 1500 | 36.3615 | 0.2057 | 0.8074 | | 62.2814 | 51.0 | 1530 | 35.0609 | 0.2057 | 0.8072 | | 70.1596 | 52.0 | 1560 | 42.3561 | 0.22 | 0.8081 | | 54.3056 | 53.0 | 1590 | 46.1524 | 0.22 | 0.8081 | | 71.8594 | 54.0 | 1620 | 28.3508 | 0.22 | 0.8072 | | 49.9168 | 55.0 | 1650 | 37.2288 | 0.2314 | 0.8080 | | 65.9318 | 56.0 | 1680 | 36.7554 | 0.2029 | 0.8071 | | 57.0402 | 57.0 | 1710 | 30.4044 | 0.2057 | 0.8067 | | 64.8804 | 58.0 | 1740 | 31.0801 | 0.2143 | 0.8074 | | 54.3674 | 59.0 | 1770 | 38.3145 | 0.2229 | 0.8081 | | 45.8036 | 60.0 | 1800 | 31.7272 | 0.2086 | 0.8074 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.1+cu118 - Datasets 3.5.1 - Tokenizers 0.21.1
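The card above reports WER/CER for a fine-tuned wav2vec2 CTC checkpoint but leaves its usage sections empty. Below is a minimal inference sketch using the standard `transformers` ASR pipeline; the repo id is taken from the row's `modelId`, and `clip.wav` is a hypothetical 16 kHz mono audio file, not something the card specifies.

```python
from transformers import pipeline

# Minimal sketch, assuming the checkpoint loads as a standard wav2vec2 CTC model.
# "clip.wav" is a hypothetical audio file supplied by the caller.
asr = pipeline("automatic-speech-recognition", model="8688chris/helldivers2-jarvis-asrV3")
result = asr("clip.wav")
print(result["text"])
```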
mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF
mradermacher
2025-05-05T01:06:18Z
0
1
transformers
[ "transformers", "gguf", "dnotitia", "nlp", "llm", "conversation", "chat", "en", "base_model:dnotitia/Smoothie-Qwen2.5-7B-Instruct", "base_model:quantized:dnotitia/Smoothie-Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T15:59:39Z
--- base_model: dnotitia/Smoothie-Qwen2.5-7B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - dnotitia - nlp - llm - conversation - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dnotitia/Smoothie-Qwen2.5-7B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
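The usage section of the card above defers to external READMEs. As a hedged illustration (not taken from the card), the sketch below downloads the Q4_K_M quant listed in the table with `huggingface_hub` and loads it with `llama-cpp-python`; the context size is an arbitrary choice.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the "fast, recommended" Q4_K_M quant from the table above.
path = hf_hub_download(
    repo_id="mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF",
    filename="Smoothie-Qwen2.5-7B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # n_ctx is an illustrative value
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a GGUF quant?"}]
)
print(out["choices"][0]["message"]["content"])
```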
mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF
mradermacher
2025-05-05T01:05:35Z
2
1
transformers
[ "transformers", "gguf", "dnotitia", "nlp", "llm", "conversation", "chat", "en", "base_model:dnotitia/Smoothie-Qwen2.5-7B-Instruct", "base_model:quantized:dnotitia/Smoothie-Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-02T22:05:54Z
--- base_model: dnotitia/Smoothie-Qwen2.5-7B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - dnotitia - nlp - llm - conversation - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/dnotitia/Smoothie-Qwen2.5-7B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-7B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-7B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF
mradermacher
2025-05-05T01:05:31Z
4
1
transformers
[ "transformers", "gguf", "dnotitia", "nlp", "llm", "conversation", "chat", "en", "base_model:dnotitia/Smoothie-Qwen2.5-14B-Instruct", "base_model:quantized:dnotitia/Smoothie-Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-02T22:08:03Z
--- base_model: dnotitia/Smoothie-Qwen2.5-14B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - dnotitia - nlp - llm - conversation - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/dnotitia/Smoothie-Qwen2.5-14B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF/resolve/main/Smoothie-Qwen2.5-14B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
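Since the imatrix cards above enumerate many quant files, a quick way to inspect what a repo actually provides is to list its files through the Hub API. This is a sketch using the documented `huggingface_hub` call, not something the cards themselves prescribe.

```python
from huggingface_hub import list_repo_files

# Enumerate the GGUF quants available in the 14B imatrix repo.
files = [f for f in list_repo_files("mradermacher/Smoothie-Qwen2.5-14B-Instruct-i1-GGUF")
         if f.endswith(".gguf")]
print(sorted(files))
```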
fffanx/Llama-3.2-1B-Instruct-GRPO-agent17_E14
fffanx
2025-05-05T00:59:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:59:22Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent17_E14 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent17_E14 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent17_E14", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E14
fffanx
2025-05-05T00:58:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:58:20Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent15_E14 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent15_E14 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E14", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
GitBag/a_star_final_ppo_math_7_critic
GitBag
2025-05-05T00:56:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
token-classification
2025-05-04T14:44:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
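The card above is an unfilled template, but the row metadata tags the repo as a Qwen2 token-classification model (a PPO critic). Below is a hedged loading sketch based only on that pipeline tag, not on any documentation from the authors; the example input is arbitrary.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumption: the critic exposes a standard token-classification head.
tok = AutoTokenizer.from_pretrained("GitBag/a_star_final_ppo_math_7_critic")
model = AutoModelForTokenClassification.from_pretrained("GitBag/a_star_final_ppo_math_7_critic")
inputs = tok("2 + 2 = 4", return_tensors="pt")
scores = model(**inputs).logits  # per-token scores from the critic head
```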
AnonymousForReview2/watereddown_reranker_pythia_cqtr_epochs
AnonymousForReview2
2025-05-05T00:56:16Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/pythia-6.9b", "base_model:adapter:EleutherAI/pythia-6.9b", "region:us" ]
null
2025-05-05T00:56:12Z
--- base_model: EleutherAI/pythia-6.9b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
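This PEFT card is likewise an empty template; the only solid facts are the row's `base_model` (EleutherAI/pythia-6.9b) and the PEFT 0.14.0 framework version. A hedged sketch for attaching the adapter, assuming it is a standard LoRA-style adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds a PEFT adapter for the listed base model.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-6.9b")
model = PeftModel.from_pretrained(base, "AnonymousForReview2/watereddown_reranker_pythia_cqtr_epochs")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b")
```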
AnonymousForReview2/watereddown_reranker_mistral_cqtr_mlp_only
AnonymousForReview2
2025-05-05T00:56:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2025-05-05T00:56:05Z
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
AnonymousForReview2/watereddown_reranker_mistral_cqtr_1epoch
AnonymousForReview2
2025-05-05T00:56:04Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2025-05-05T00:56:00Z
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
GitBag/a_star_final_ppo_math_7_actor
GitBag
2025-05-05T00:54:12Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T14:41:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ziyan98/lul-sft
ziyan98
2025-05-05T00:54:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "conversational", "dataset:akhauriyash/OpenR1_Math_SpeculativeReasoning", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T23:20:03Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B datasets: akhauriyash/OpenR1_Math_SpeculativeReasoning library_name: transformers tags: - generated_from_trainer - open-r1 licence: license --- # Model Card for lul-sft This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [akhauriyash/OpenR1_Math_SpeculativeReasoning](https://huggingface.co/datasets/akhauriyash/OpenR1_Math_SpeculativeReasoning) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ziyan98/lul-sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xyiiiiiii-tsinghua-university/LuL/runs/5bej5gxt) This model was trained with SFT. ### Framework versions - TRL: 0.16.0 - Transformers: 4.50.0 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent6_E14
fffanx
2025-05-05T00:53:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:53:29Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent6_E14 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent6_E14 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent6_E14", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mlfoundations-dev/d1_math_all_1k
mlfoundations-dev
2025-05-05T00:53:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T22:27:06Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_math_all_1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_math_all_1k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_all_1k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - total_eval_batch_size: 32 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
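The card provides no usage example. A minimal sketch in the style of the other cards in this dump, assuming the checkpoint follows standard Qwen2.5-Instruct chat conventions (only the model id is confirmed by this record):

```python
from transformers import pipeline

# Hedged quick start: the card itself documents no usage, and the prompt
# format is assumed to match standard Qwen2.5-Instruct chat conventions.
generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/d1_math_all_1k",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 12 * 17?"}]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```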
shibajustfor/98bd9d23-3750-499e-95ac-d1530862f00f
shibajustfor
2025-05-05T00:52:19Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:adapter:codellama/CodeLlama-7b-Instruct-hf", "region:us" ]
null
2025-05-05T00:51:52Z
--- library_name: peft tags: - generated_from_trainer base_model: codellama/CodeLlama-7b-Instruct-hf model-index: - name: shibajustfor/98bd9d23-3750-499e-95ac-d1530862f00f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shibajustfor/98bd9d23-3750-499e-95ac-d1530862f00f This model is a PEFT adapter fine-tuned from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf); the training dataset is not documented. It achieves the following results on the evaluation set: - Loss: 0.0946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
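Because this is a PEFT adapter rather than a standalone model, it must be attached to the CodeLlama base named in the frontmatter. A minimal sketch, assuming a standard adapter layout; the card does not document the PEFT configuration or an intended prompt format:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-Instruct-hf"  # base model from the frontmatter
adapter_id = "shibajustfor/98bd9d23-3750-499e-95ac-d1530862f00f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```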
fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E14
fffanx
2025-05-05T00:51:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:51:22Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent2_E14 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent2_E14 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E14", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Miriam20252025/Miriam_Araujo
Miriam20252025
2025-05-05T00:49:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T23:57:18Z
--- license: apache-2.0 ---
fffanx/Llama-3.2-1B-Instruct-GRPO-agent19_E13
fffanx
2025-05-05T00:45:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:44:38Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent19_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent19_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent19_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E13
fffanx
2025-05-05T00:44:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:44:06Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent18_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent18_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E13
fffanx
2025-05-05T00:43:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:43:03Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent16_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent16_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF
mradermacher
2025-05-05T00:43:10Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "en", "dataset:andaba/TEMPURA-VER", "base_model:andaba/TEMPURA-Qwen2.5-VL-3B-s2", "base_model:quantized:andaba/TEMPURA-Qwen2.5-VL-3B-s2", "license:cc-by-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T00:35:09Z
--- base_model: andaba/TEMPURA-Qwen2.5-VL-3B-s2 datasets: - andaba/TEMPURA-VER language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - text-generation-inference --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/andaba/TEMPURA-Qwen2.5-VL-3B-s2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s2.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
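To make the usage notes above concrete, here is a minimal sketch of fetching the recommended Q4_K_M quant from the table and pointing a GGUF-capable runtime at it (the repo id and filename come from the table; the llama-cli invocation assumes a recent llama.cpp build):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed as "fast, recommended" in the table above.
path = hf_hub_download(
    repo_id="mradermacher/TEMPURA-Qwen2.5-VL-3B-s2-GGUF",
    filename="TEMPURA-Qwen2.5-VL-3B-s2.Q4_K_M.gguf",
)
print(path)  # e.g. run with: llama-cli -m <path> -p "Hello"
```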
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
jfrost10/legal-ft-9ed0fe19-8072-40cd-95af-56242e6565ce
jfrost10
2025-05-05T00:42:47Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:156", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-05T00:41:25Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:156 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: How many tokens can Google’s Gemini series accept in its models? sentences: - 'Just this week, the New York Times launched a landmark lawsuit against OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially the first few pages, which lay out the issues in a way that’s surprisingly easy to follow. The rest of the document includes some of the clearest explanations of what LLMs are, how they work and how they are built that I’ve read anywhere. The legal arguments here are complex. I’m not a lawyer, but I don’t think this one will be easily decided. Whichever way it goes, I expect this case to have a profound impact on how this technology develops in the future.' - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.' - 'Prompt injection is a natural consequence of this gulibility. I’ve seen precious little progress on tackling that problem in 2024, and we’ve been talking about it since September 2022. I’m beginning to see the most popular idea of “agents” as dependent on AGI itself. A model that’s robust against gulliblity is a very tall order indeed. Evals really matter Anthropic’s Amanda Askell (responsible for much of the work behind Claude’s Character):' - source_sentence: How did the construction of railways in the 1800s impact the environment? sentences: - 'These abilities are just a few weeks old at this point, and I don’t think their impact has been fully felt yet. If you haven’t tried them out yet you really should. Both Gemini and OpenAI offer API access to these features as well. OpenAI started with a WebSocket API that was quite challenging to use, but in December they announced a new WebRTC API which is much easier to get started with. Building a web app that a user can talk to via voice is easy now! Prompt driven app generation is a commodity already This was possible with GPT-4 in 2023, but the value it provides became evident in 2024.' - 'An interesting point of comparison here could be the way railways rolled out around the world in the 1800s. Constructing these required enormous investments and had a massive environmental impact, and many of the lines that were built turned out to be unnecessary—sometimes multiple lines from different companies serving the exact same routes! The resulting bubbles contributed to several financial crashes, see Wikipedia for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They left us with a lot of useful infrastructure and a great deal of bankruptcies and environmental damage. The year of slop' - 'My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful machine, but it’s also nearly two years old now—and crucially it’s the same laptop I’ve been using ever since I first ran an LLM on my computer back in March 2023 (see Large language models are having their Stable Diffusion moment). That same laptop that could just about run a GPT-3-class model in March last year has now run multiple GPT-4 class models! 
Some of my notes on that:' - source_sentence: When did Meta release the original Llama model? sentences: - 'Then in December, the Chatbot Arena team introduced a whole new leaderboard for this feature, driven by users building the same interactive app twice with two different models and voting on the answer. Hard to come up with a more convincing argument that this feature is now a commodity that can be effectively implemented against all of the leading models. I’ve been tinkering with a version of this myself for my Datasette project, with the goal of letting users use prompts to build and iterate on custom widgets and data visualizations against their own data. I also figured out a similar pattern for writing one-shot Python programs, enabled by uv.' - 'Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook. I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call! This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use. Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.' - 'On the one hand, we keep on finding new things that LLMs can do that we didn’t expect—and that the people who trained the models didn’t expect either. That’s usually really fun! But on the other hand, the things you sometimes have to do to get the models to behave are often incredibly dumb. Does ChatGPT get lazy in December, because its hidden system prompt includes the current date and its training data shows that people provide less useful answers coming up to the holidays? The honest answer is “maybe”! No-one is entirely sure, but if you give it a different date its answers may skew slightly longer.' - source_sentence: What are some companies mentioned that have developed multi-modal audio models? sentences: - 'The boring yet crucial secret behind good system prompts is test-driven development. You don’t write down a system prompt and find ways to test it. You write down tests and find a system prompt that passes them. It’s become abundantly clear over the course of 2024 that writing good automated evals for LLM-powered systems is the skill that’s most needed to build useful applications on top of these models. If you have a strong eval suite you can adopt new models faster, iterate better and build more reliable and useful product features than your competition. Vercel’s Malte Ubl:' - 'The top five: ai (342), generativeai (300), llms (287), openai (86), chatgpt (78). I’ve written a lot about this stuff! I grabbed a screenshot of my Plausible analytics for the year, fed that to ChatGPT Vision, told it to extract the data into a table, then got it to mix in entry titles (from a SQL query it wrote) and produced this table with it. Here are my top entries this year by amount of traffic: Article Visitors Pageviews Bing: “I will not harm you unless you harm me first” 1.1M 1.3M Leaked Google document: “We Have No Moat, And Neither Does OpenAI” 132k 162k Large language models are having their Stable Diffusion moment 121k 150k Prompt injection: What’s the worst that can happen? 79.8k 95.9k' - 'Your browser does not support the audio element. OpenAI aren’t the only group with a multi-modal audio model. 
Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025. Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans: Your browser does not support the audio element.' - source_sentence: What is the most important factor in determining the quality of a trained model according to the context? sentences: - 'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version! What matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is. If you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.' - 'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can. The ethics of this space remain diabolically complex In September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion. Since then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.' - 'I also gave a bunch of talks and podcast appearances. 
I’ve started habitually turning my talks into annotated presentations—here are my best from 2023: Prompt injection explained, with video, slides, and a transcript Catching up on the weird world of LLMs Making Large Language Models work for you Open questions for AI engineering Embeddings: What they are and why they matter Financial sustainability for open source projects at GitHub Universe And in podcasts: What AI can do for you on the Theory of Change Working in public on Path to Citus Con LLMs break the internet on the Changelog Talking Large Language Models on Rooftop Ruby Thoughts on the OpenAI board situation on Newsroom Robots' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.9166666666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 1.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9166666666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3333333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9166666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 1.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9692441461309548 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9583333333333334 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9583333333333334 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jfrost10/legal-ft-9ed0fe19-8072-40cd-95af-56242e6565ce") # Run inference sentences = [ 'What is the most important factor in determining the quality of a trained model according to the context?', 'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!\nWhat matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is.\nIf you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.', 'I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023:\n\nPrompt injection explained, with video, slides, and a transcript\nCatching up on the weird world of LLMs\nMaking Large Language Models work for you\nOpen questions for AI engineering\nEmbeddings: What they are and why they matter\nFinancial sustainability for open source projects at GitHub Universe\n\nAnd in podcasts:\n\n\nWhat AI can do for you on the Theory of Change\n\nWorking in public on Path to Citus Con\n\nLLMs break the internet on the Changelog\n\nTalking Large Language Models on Rooftop Ruby\n\nThoughts on the OpenAI board situation on Newsroom Robots', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9167 | | cosine_accuracy@3 | 1.0 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.9167 | | cosine_precision@3 | 0.3333 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.9167 | | cosine_recall@3 | 1.0 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9692** | | cosine_mrr@10 | 0.9583 | | cosine_map@100 | 0.9583 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 156 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 156 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 21.13 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.15 tokens</li><li>max: 214 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:-------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What was the typical context length accepted by most models last year?</code> | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code> | | <code>How many tokens can Google’s Gemini series accept in its models?</code> | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. 
Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code> | | <code>What factors contributed to the crash in LLM prices according to the context?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - 
`deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9638 | | 2.0 | 32 | 0.9484 | | 3.0 | 48 | 0.9539 | | 3.125 | 50 | 0.9539 | | 4.0 | 64 | 0.9539 | | 5.0 | 80 | 0.9484 | | 6.0 | 96 | 0.9846 | | 6.25 | 100 | 0.9846 | | 7.0 | 112 | 0.9692 | | 8.0 | 128 | 0.9692 | | 9.0 | 144 | 0.9692 | | 9.375 | 150 | 0.9692 | | 10.0 | 160 | 0.9692 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## 
Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
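Since training used MatryoshkaLoss with dimensions [768, 512, 256, 128, 64], embeddings can be truncated to a smaller size with modest quality loss. A minimal sketch, assuming the `truncate_dim` option available in recent sentence-transformers releases (the card pins 4.1.0, which supports it):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dim embeddings, one of the
# Matryoshka dimensions listed in the training loss configuration.
model = SentenceTransformer(
    "jfrost10/legal-ft-9ed0fe19-8072-40cd-95af-56242e6565ce",
    truncate_dim=256,
)
embeddings = model.encode(["How many tokens can Gemini accept?"])
print(embeddings.shape)  # (1, 256)
```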
fffanx/Llama-3.2-1B-Instruct-GRPO-agent14_E13
fffanx
2025-05-05T00:42:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:42:01Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent14_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent14_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent14_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent13_E13
fffanx
2025-05-05T00:41:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:41:29Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent13_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent13_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent13_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E13
fffanx
2025-05-05T00:40:54Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:40:26Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent11_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent11_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_emotion_naive_outcome_0_01_0_1_seed_1_MC
gradientrouting-spar
2025-05-05T00:40:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:40:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E13
fffanx
2025-05-05T00:39:18Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:38:48Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent8_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent8_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent6_E13
fffanx
2025-05-05T00:38:13Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:37:44Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent6_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent6_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent6_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
koussayyyy/qwen_testcase_model
koussayyyy
2025-05-05T00:37:55Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2025-05-04T23:25:35Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - generated_from_trainer model-index: - name: qwen_testcase_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen_testcase_model This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
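The card lists training hyperparameters but stops short of a usage example. Below is a minimal sketch of loading the adapter with PEFT; it assumes this repo holds LoRA adapter weights for the Qwen/Qwen2.5-7B base named above (as the `base_model:adapter:Qwen/Qwen2.5-7B` tag suggests), and the prompt is purely illustrative.

```python
# Minimal sketch, not from the original card: load the base model, then
# attach the adapter weights published in this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "koussayyyy/qwen_testcase_model")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# Illustrative prompt; the card does not document the intended input format.
inputs = tokenizer("Write a unit test case for a login endpoint:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```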
mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF
mradermacher
2025-05-05T00:37:07Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "en", "dataset:andaba/TEMPURA-VER", "base_model:andaba/TEMPURA-Qwen2.5-VL-3B-s1", "base_model:quantized:andaba/TEMPURA-Qwen2.5-VL-3B-s1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T00:30:05Z
--- base_model: andaba/TEMPURA-Qwen2.5-VL-3B-s1 datasets: - andaba/TEMPURA-VER language: - en library_name: transformers quantized_by: mradermacher tags: - text-generation-inference --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/andaba/TEMPURA-Qwen2.5-VL-3B-s1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF/resolve/main/TEMPURA-Qwen2.5-VL-3B-s1.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
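The Usage section above defers to TheBloke's READMEs; for concreteness, here is a minimal sketch of running one of the provided quants with llama.cpp's CLI. The file name matches the Q4_K_M row in the quant table, the prompt is illustrative, and since the base model is a vision-language model, text-only CLI use may not exercise its image capabilities.

```bash
# Minimal sketch, assuming a recent llama.cpp build with HF download support.
llama-cli --hf-repo mradermacher/TEMPURA-Qwen2.5-VL-3B-s1-GGUF \
  --hf-file TEMPURA-Qwen2.5-VL-3B-s1.Q4_K_M.gguf \
  -p "Describe what happens in the video segment:"
```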
fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E13
fffanx
2025-05-05T00:36:00Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:35:31Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent2_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent2_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E13
fffanx
2025-05-05T00:35:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:34:48Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent1_E13 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent1_E13 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E13", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
annemiekebickleyoy/ea6ee32d-888e-49d4-86d9-56cb1a79f213
annemiekebickleyoy
2025-05-05T00:33:16Z
0
0
transformers
[ "transformers", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:32:30Z
--- library_name: transformers model_name: annemiekebickleyoy/ea6ee32d-888e-49d4-86d9-56cb1a79f213 tags: - generated_from_trainer licence: license --- # Model Card for annemiekebickleyoy/ea6ee32d-888e-49d4-86d9-56cb1a79f213 This model is a fine-tuned version of an unspecified base model. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="annemiekebickleyoy/ea6ee32d-888e-49d4-86d9-56cb1a79f213", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/pc-agent-7b-GGUF
mradermacher
2025-05-05T00:31:58Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:henryhe0123/pc-agent-7b", "base_model:quantized:henryhe0123/pc-agent-7b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T00:22:15Z
--- base_model: henryhe0123/pc-agent-7b language: - en library_name: transformers license: other quantized_by: mradermacher tags: - llama-factory - full - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/henryhe0123/pc-agent-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/pc-agent-7b-GGUF/resolve/main/pc-agent-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
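Rather than leaving usage entirely to TheBloke's READMEs, a concrete command may help: the sketch below runs the recommended Q4_K_M quant with llama.cpp's CLI. The file name is taken from the quant table above; the prompt is illustrative only.

```bash
# Minimal sketch, assuming a recent llama.cpp build with HF download support.
llama-cli --hf-repo mradermacher/pc-agent-7b-GGUF \
  --hf-file pc-agent-7b.Q4_K_M.gguf \
  -p "List the steps to open a file in a text editor:"
```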
fffanx/Llama-3.2-1B-Instruct-GRPO-agent19_E12
fffanx
2025-05-05T00:27:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:26:44Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent19_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent19_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent19_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E12
fffanx
2025-05-05T00:26:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:26:12Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent18_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent18_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent18_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E12
fffanx
2025-05-05T00:25:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:24:36Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent15_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent15_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent12_E12
fffanx
2025-05-05T00:23:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:23:01Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent12_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent12_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent12_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Romain-XV/00f20144-4072-4d19-a3fd-acd9d9fe3430
Romain-XV
2025-05-05T00:23:04Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T00:00:07Z
--- base_model: unsloth/Qwen2-0.5B-Instruct library_name: transformers model_name: 00f20144-4072-4d19-a3fd-acd9d9fe3430 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 00f20144-4072-4d19-a3fd-acd9d9fe3430 This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Romain-XV/00f20144-4072-4d19-a3fd-acd9d9fe3430", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/zymg8dxx) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E12
fffanx
2025-05-05T00:22:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:22:29Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent11_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent11_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent11_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent10_E12
fffanx
2025-05-05T00:22:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:21:57Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent10_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent10_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent10_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Johnkopler/Newbie
Johnkopler
2025-05-05T00:22:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T00:22:25Z
--- license: apache-2.0 ---
fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E12
fffanx
2025-05-05T00:20:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:20:22Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent7_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent7_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF
YOYO-AI
2025-05-05T00:20:49Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:YOYO-AI/EVA-QwQ-32B", "base_model:quantized:YOYO-AI/EVA-QwQ-32B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T00:19:22Z
--- base_model: YOYO-AI/EVA-QwQ-32B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF This model was converted to GGUF format from [`YOYO-AI/EVA-QwQ-32B`](https://huggingface.co/YOYO-AI/EVA-QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/YOYO-AI/EVA-QwQ-32B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF --hf-file eva-qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF --hf-file eva-qwq-32b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF --hf-file eva-qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo YOYO-AI/EVA-QwQ-32B-Q4_K_M-GGUF --hf-file eva-qwq-32b-q4_k_m.gguf -c 2048 ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent4_E12
fffanx
2025-05-05T00:19:13Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:18:45Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent4_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent4_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent4_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E12
fffanx
2025-05-05T00:18:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T00:17:39Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent2_E12 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent2_E12 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent2_E12", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```